defmodule Remedy.Schema.Component do
@moduledoc """
Components are a framework for adding interactive elements to the messages your app or bot sends. They're accessible, customizable, and easy to use. There are several different types of components; this documentation outlines the basics of the framework and gives an example of each.
> Components have been broken out into individual modules to make the distinction between them clear, and to keep helper functions and type checking separate per component type - especially as more components are added by Discord.
Each component module is provided all of the valid types through this module, to avoid repetition and to allow new components to be added more quickly and easily.
## Action Row
An Action Row is a non-interactive container component for other types of components. It has a `type: 1` and a sub-array of `components` of other types.
- You can have up to 5 Action Rows per message
- An Action Row cannot contain another Action Row
- An Action Row containing buttons cannot also contain a select menu
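For example, a minimal action row wrapping a single interaction button could look like the sketch below (the raw shape; the helper modules described above are the preferred way to build these):

    %{
      type: 1,
      components: [
        %{type: 2, style: 1, label: "Click me", custom_id: "click_me"}
      ]
    }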
## Buttons
Buttons are interactive components that render on messages. They have a `type: 2` and can be clicked by users. Buttons in Nostrum are further separated into two types, detailed below. Only the [Interaction Button](#module-interaction-buttons-non-link-buttons) will fire a `Nostrum.Struct.Interaction` when pressed.

- Buttons must exist inside an Action Row
- An Action Row can contain up to 5 buttons
- An Action Row containing buttons cannot also contain a select menu
Check out the [Discord API Button Styles](https://discord.com/developers/docs/interactions/message-components#button-object-button-styles) for more information.
## Link Buttons
- Link buttons **do not** send an `interaction` to your app when clicked
- Link buttons **must** have a `url`, and **cannot** have a `custom_id`
- Link buttons will **always** use `style: 5`
#### Link `style: 5`
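A link button could look like the following sketch (the URL is a placeholder):

    %{type: 2, style: 5, label: "Documentation", url: "https://example.com/docs"}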

## Interaction Buttons ( Non-link Buttons )
> Discord calls these buttons "Non-link Buttons" because they do not contain a url. However, it is more accurate to call them "Interaction Buttons", as they **do** fire an interaction when clicked, which is far more useful for your application's interactivity. As such, they are referred to as "Interaction Buttons" throughout the rest of this module.
- Interaction buttons **must** have a `custom_id`, and **cannot** have a `url`
- Can have one of the `:style` values below applied, as in the sketch that follows.
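For example, a primary-style interaction button (a sketch; the `custom_id` is arbitrary):

    %{type: 2, style: 1, label: "Confirm", custom_id: "confirm_order"}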
#### Primary `style: 1`

#### Secondary `style: 2`

#### Success `style: 3`

#### Danger `style: 4`

## 🐼 ~~Emoji Buttons~~
> Note: The Discord documentation and marketing material about buttons suggests that there are three kinds of buttons: 🐼 **Emoji Buttons**, **Link Buttons** & **Non-Link Buttons**, when in fact all buttons can contain an emoji. For this reason 🐼 **Emoji Buttons** are not included as a separate type; emojis are instead handled by the two included (superior) button types.

> The field requirements are already becoming convoluted, especially considering that everything so far is still a "Component". Using the sub-types and helper functions will ensure all of the rules are followed when creating components.
## Select Menu
Select menus are another interactive component that renders on messages. On desktop, clicking on a select menu opens a dropdown-style UI; on mobile, tapping a select menu opens up a half-sheet with the options.

Select menus support single-select and multi-select behavior, meaning you can prompt a user to choose just one item from a list, or multiple. When a user finishes making their choice by clicking out of the dropdown or closing the half-sheet, your app will receive an interaction.
- Select menus **must** be sent inside an Action Row
- An Action Row can contain **only one** select menu
- An Action Row containing a select menu **cannot** also contain buttons
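A minimal select menu, wrapped in its action row, could look like this sketch:

    %{
      type: 1,
      components: [
        %{
          type: 3,
          custom_id: "pick_one",
          placeholder: "Choose an option",
          options: [
            %{label: "Option A", value: "a"},
            %{label: "Option B", value: "b"}
          ]
        }
      ]
    }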
"""
defmacro __using__(_opts) do
quote do
alias Remedy.Schema.Component.{ActionRow, Button, Option, SelectMenu}
alias Remedy.Schema.{Component, Emoji}
@before_compile Component
end
end
defmacro __before_compile__(_env) do
quote do
alias Remedy.Schema.Component
def new(opts \\ []) do
@defaults
|> to_component(opts)
end
def update(%Component{} = component, opts \\ []) do
component
|> Map.from_struct()
|> to_component(opts)
end
defp to_component(component_map, opts) do
opts
|> Enum.reject(fn {_, v} -> v == nil end)
|> Enum.into(component_map)
|> Enum.filter(fn {k, _} -> k in allowed_keys() end)
|> Enum.into(%{})
|> flatten()
|> Component.new()
end
defp allowed_keys, do: Map.keys(@defaults)
## Destroy all structs and ensure nested map
def flatten(map), do: :maps.map(&do_flatten/2, map)
defp do_flatten(_key, value), do: enm(value)
defp enm(list) when is_list(list), do: Enum.map(list, &enm/1)
defp enm(%{__struct__: _} = strct), do: :maps.map(&do_flatten/2, Map.from_struct(strct))
defp enm(data), do: data
end
end
@doc """
Create a component from the given keyword list of options
> Note: When using this function directly, you are not guaranteed to produce a valid component, and it is your responsibility to ensure you are passing a valid combination of component attributes. For example, if you give a button component both a `custom_id` and a `url`, the component is invalid, as only one of these fields is allowed.
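For example, assuming a `Button` module that implements this callback:

    Button.new(label: "Help", style: 2, custom_id: "help")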
"""
@callback new(opts :: keyword()) :: t()
@doc """
Updates a component with the parameters provided.
> Note: When using this function directly, you are not guaranteed to produce a valid component, and it is your responsibility to ensure you are passing a valid combination of component attributes. For example, if you give a button component both a `custom_id` and a `url`, the component is invalid, as only one of these fields is allowed.
"""
@callback update(t(), opts :: keyword()) :: t()
alias Remedy.Schema.Component.{ActionRow, Button, SelectMenu, ComponentOption}
use Remedy.Schema
embedded_schema do
field :type, ComponentType
field :custom_id, :string
field :disabled, :boolean
field :style, :integer
field :label, :string
field :url, :string
field :placeholder, :string
field :min_values, :integer
field :max_values, :integer
embeds_one :emoji, Emoji
embeds_many :options, ComponentOption
embeds_many :components, Component
end
@typedoc """
The currently valid component types.
"""
@type t :: ActionRow.t() | Button.t() | SelectMenu.t()
@typedoc """
The type of component.
Valid for All Types.
| Value | Component Type |
|-------|----------------|
| `1` | Action Row |
| `2` | Button |
| `3` | SelectMenu |
Check out the [Discord API Message Component Types](https://discord.com/developers/docs/interactions/message-components#component-object-component-types) for more information.
"""
@type type :: integer()
@typedoc """
Used to identify the command when the interaction is sent to you from the user.
Valid for [Interaction Buttons](#module-interaction-button) & [Select Menus](#module-select-menu).
"""
@type custom_id :: String.t() | nil
@typedoc """
Indicates if the component is disabled or not.
Valid for [Buttons](#module-buttons) & [Select Menus](#module-select-menu).
"""
@type disabled :: boolean() | nil
@typedoc """
Indicates the style.
Valid for [Interaction Buttons](#module-interaction-button) only.
"""
@type style :: integer() | nil
@typedoc """
A string that appears on the button, max 80 characters.
Valid for [Buttons](#module-buttons)
"""
@type label :: String.t() | nil
@typedoc """
A partial emoji to display on the object.
Valid for [Buttons](#module-buttons)
"""
@type emoji :: Emoji.t() | nil
@typedoc """
A url for link buttons.
Valid for: [Buttons](#module-buttons)
"""
@type url :: String.t() | nil
@typedoc """
A list of options for select menus, max 25.
Valid for [Select Menus](#module-select-menu).
"""
@type options :: [ComponentOption.t()] | nil
@typedoc """
Placeholder text if nothing is selected, max 100 characters
Valid for [Select Menus](#module-select-menu).
"""
@type placeholder :: String.t() | nil
@typedoc """
The minimum number of permitted selections. Minimum value 0, max 25.
Valid for [Select Menus](#module-select-menu).
"""
@type min_values :: integer() | nil
@typedoc """
The maximum number of permitted selections. Minimum value 0, max 25.
Valid for [Select Menus](#module-select-menu).
"""
@type max_values :: integer() | nil
@typedoc """
A list of components to place inside an action row.
Due to constraints of action rows, this can either be a list of up to five buttons, or a single select menu.
Valid for [Action Row](#module-action-row).
"""
@type components :: [SelectMenu.t() | Button.t()] | nil
def changeset(model \\ %__MODULE__{}, params) do
fields = __MODULE__.__schema__(:fields)
embeds = __MODULE__.__schema__(:embeds)
cast_model = cast(model, params, fields -- embeds)
Enum.reduce(embeds, cast_model, fn embed, cast_model ->
cast_embed(cast_model, embed)
end)
end
end
# source: components/component.ex
defmodule Rolodex.Headers do
@moduledoc """
Exposes functions and macros for defining reusable headers in route doc
annotations or responses.
It exposes the following macros, which when used together will set up the headers:
- `headers/2` - for declaring the headers
- `header/3` - for declaring a single header for the set
It also exposes the following functions:
- `is_headers_module?/1` - determines if the provided item is a module that has
defined a reusable headers set
- `to_map/1` - serializes the headers module into a map
"""
alias Rolodex.{DSL, Field}
defmacro __using__(_) do
quote do
use Rolodex.DSL
import Rolodex.Headers, only: :macros
end
end
@doc """
Opens up the headers definition for the current module. Will name the headers
set and generate a list of header fields based on the macro calls within.
**Accepts**
- `name` - the headers name
- `block` - headers shape definition
## Example
defmodule SimpleHeaders do
use Rolodex.Headers
headers "SimpleHeaders" do
field "X-Rate-Limited", :boolean
field "X-Per-Page", :integer, desc: "Number of items in the response"
end
end
"""
defmacro headers(name, do: block) do
quote do
unquote(block)
def __headers__(:name), do: unquote(name)
def __headers__(:headers), do: Map.new(@headers, fn {id, opts} -> {id, Field.new(opts)} end)
end
end
@doc """
Sets a header field.
**Accepts**
- `identifier` - the header name
- `type` - the header field type
- `opts` (optional) - additional metadata. See `Field.new/1` for a list of
valid options.
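## Example
A hypothetical header field inside a `headers/2` block:
    headers "RequestHeaders" do
      field "X-Request-Id", :string, desc: "Unique ID assigned to the request"
    end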
"""
defmacro field(identifier, type, opts \\ []) do
DSL.set_field(:headers, identifier, type, opts)
end
@doc """
Determines if an arbitrary item is a module that has defined a reusable headers
set via `Rolodex.Headers` macros
## Example
iex> defmodule SimpleHeaders do
...>   use Rolodex.Headers
...>
...>   headers "SimpleHeaders" do
...>     field "X-Rate-Limited", :boolean
...>   end
...> end
iex>
iex> # Validating a headers module
iex> Rolodex.Headers.is_headers_module?(SimpleHeaders)
true
iex> # Validating some other module
iex> Rolodex.Headers.is_headers_module?(OtherModule)
false
"""
@spec is_headers_module?(any()) :: boolean()
def is_headers_module?(mod), do: DSL.is_module_of_type?(mod, :__headers__)
@doc """
Serializes the `Rolodex.Headers` metadata into a formatted map
## Example
iex> defmodule SimpleHeaders do
...> use Rolodex.Headers
...>
...> headers "SimpleHeaders" do
...> field "X-Rate-Limited", :boolean
...> field "X-Per-Page", :integer, desc: "Number of items in the response"
...> end
...> end
iex>
iex> Rolodex.Headers.to_map(SimpleHeaders)
%{
"X-Per-Page" => %{desc: "Number of items in the response", type: :integer},
"X-Rate-Limited" => %{type: :boolean}
}
"""
@spec to_map(module()) :: map()
def to_map(mod), do: mod.__headers__(:headers)
end
# source: lib/rolodex/headers.ex
defmodule Exchange do
@moduledoc """
The best Elixir Exchange supporting limit and market orders. RESTful API and fancy dashboard supported soon!
"""
@doc """
Places an order on the Exchange
## Parameters
- order_params: Map that represents the parameters of the order to be placed
- ticker: Atom that represents on which market the order should be placed
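## Example
    # A sketch with hypothetical order parameters and ticker; the accepted
    # fields are determined by `Exchange.Validations.cast_order/1`.
    Exchange.place_order(%{side: :buy, type: :limit, size: 5, price: 4000}, :AUXLND)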
"""
@spec place_order(order_params :: map(), ticker :: atom()) :: atom() | {atom(), String.t()}
def place_order(order_params, ticker) do
order_params = Map.put(order_params, :ticker, ticker)
case Exchange.Validations.cast_order(order_params) do
{:ok, limit_order} ->
Exchange.MatchingEngine.place_order(ticker, limit_order)
{:error, errors} ->
{:error, errors}
end
end
@doc """
Cancels an order on the Exchange
## Parameters
- order_id: String that represents the id of the order to cancel
- ticker: Atom that represents on which market the order should be canceled
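## Example
    # Hypothetical order id and ticker:
    Exchange.cancel_order("order-1", :AUXLND)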
"""
@spec cancel_order(order_id :: String.t(), ticker :: atom) :: atom
def cancel_order(order_id, ticker) do
Exchange.MatchingEngine.cancel_order(ticker, order_id)
end
# Level 1 Market Data
@doc """
Returns the difference between the lowest sell order and the highest buy order
## Parameters
- ticker: Atom that represents on which market the query should be placed
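## Example
    # Hypothetical ticker; returns the spread as a `Money` value:
    {:ok, spread} = Exchange.spread(:AUXLND)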
"""
@spec spread(ticker :: atom) :: {atom, Money}
def spread(ticker) do
Exchange.MatchingEngine.spread(ticker)
end
@doc """
Returns the highest price of all buy orders
## Parameters
- ticker: Atom that represents on which market the query should be placed
"""
@spec highest_bid_price(ticker :: atom) :: {atom, Money}
def highest_bid_price(ticker) do
Exchange.MatchingEngine.bid_max(ticker)
end
@doc """
Returns the sum of the sizes of all active buy orders
## Parameters
- ticker: Atom that represents on which market the query should be placed
"""
@spec highest_bid_volume(ticker :: atom) :: {atom, number}
def highest_bid_volume(ticker) do
Exchange.MatchingEngine.bid_volume(ticker)
end
@doc """
Returns the lowest price of all sell orders
## Parameters
- ticker: Atom that represents on which market the query should be placed
"""
@spec lowest_ask_price(ticker :: atom) :: {atom, Money}
def lowest_ask_price(ticker) do
Exchange.MatchingEngine.ask_min(ticker)
end
@doc """
Returns the sum of the sizes of all active sell orders
## Parameters
- ticker: Atom that represents on which market the query should be placed
"""
@spec highest_ask_volume(ticker :: atom) :: {atom, number}
def highest_ask_volume(ticker) do
Exchange.MatchingEngine.ask_volume(ticker)
end
@doc """
Returns a list of all active orders
## Parameters
- ticker: Atom that represents on which market the query should be placed
"""
@spec open_orders(ticker :: atom) :: {atom, list}
def open_orders(ticker) do
Exchange.MatchingEngine.open_orders(ticker)
end
@doc """
Returns an order by id.
## Parameters
- ticker: Atom that represents on which market the query should be placed
- order_id: String that represents the id of the order to cancel
"""
@spec open_orders_by_id(ticker :: atom, order_id :: String.t()) ::
{atom, Exchange.Order.order()}
def open_orders_by_id(ticker, order_id) do
Exchange.MatchingEngine.open_order_by_id(ticker, order_id)
end
@doc """
Returns a list of active orders placed by the trader
## Parameters
- ticker: Atom that represents on which market the query should be made
- trader_id: String that represents the id of the trader
"""
@spec open_orders_by_trader(ticker :: atom, trader_id :: String.t()) :: {atom, list}
def open_orders_by_trader(ticker, trader_id) do
Exchange.MatchingEngine.open_orders_by_trader(ticker, trader_id)
end
@doc """
Returns the latest price from a side of an Exchange
## Parameters
- ticker: Exchange identifier
- side: Atom to decide which side of the book is used
"""
@spec last_price(ticker :: atom, side :: atom) :: {atom, number}
def last_price(ticker, side) do
Exchange.MatchingEngine.last_price(ticker, side)
end
@doc """
Returns the latest size from a side of an Exchange
## Parameters
- ticker: Exchange identifier
- side: Atom to decide which side of the book is used
"""
@spec last_size(ticker :: atom, side :: atom) :: {atom, number}
def last_size(ticker, side) do
Exchange.MatchingEngine.last_size(ticker, side)
end
@doc """
Returns a list of completed trades where the trader is one of the participants
## Parameters
- ticker: Atom that represents on which market the query should be made
- trader_id: String that represents the id of the trader
"""
@spec completed_trades_by_id(ticker :: atom, trader_id :: String.t() | atom()) :: [
Exchange.Trade
]
def completed_trades_by_id(ticker, trader_id) when is_atom(trader_id) do
completed_trades_by_id(ticker, Atom.to_string(trader_id))
end
def completed_trades_by_id(ticker, trader_id) do
Exchange.Utils.fetch_completed_trades(ticker, trader_id)
end
@doc """
Returns the number of active buy orders
## Parameters
- ticker: Atom that represents on which market the query should be made
"""
@spec total_buy_orders(ticker :: atom) :: {atom, number}
def total_buy_orders(ticker) do
Exchange.MatchingEngine.total_bid_orders(ticker)
end
@doc """
Returns the number of active sell orders
## Parameters
- ticker: Atom that represents on which market the query should be made
"""
@spec total_sell_orders(ticker :: atom) :: {atom, number}
def total_sell_orders(ticker) do
Exchange.MatchingEngine.total_ask_orders(ticker)
end
@doc """
Returns all the completed trades
## Parameters
- ticker: Atom that represents on which market the query should be made
"""
@spec completed_trades(ticker :: atom) :: list
def completed_trades(ticker) do
Exchange.Utils.fetch_all_completed_trades(ticker)
end
@doc """
Returns the trade with trade_id
## Parameters
- ticker: Atom that represents on which market the query should be made
- trade_id: Id of the requested trade
"""
@spec completed_trade_by_trade_id(ticker :: atom, trade_id :: String.t()) :: Exchange.Trade.t()
def completed_trade_by_trade_id(ticker, trade_id) do
Exchange.Utils.fetch_completed_trade_by_trade_id(ticker, trade_id)
end
end
# source: lib/exchange.ex
defmodule Janus do
@moduledoc """
Core public API for `Janus`.
There are two foundational components this graph query language
is built upon: fully namespaced property names, and resolving
functions with specified inputs and outputs (i.e. resolvers).
## Fully Namespaced Property Names
Let's look at an example to get a glimpse of the significance:
%{
id: 123,
name: "<NAME>",
address: "456 Lambda Ln"
}
From this map, it *could* be difficult to understand which entity
(or logical group) the properties `:id`, `:name`, and `:address`
belong to. Because of the specific values, we can intuit that it's
likely a person, but is it a customer, or an employee? Something
else? Hard to tell...
customers: [
%{
id: 123,
name: "<NAME>",
address: "456 Lambda Ln"
}
]
Much more obvious now, but what if we're talking about multiple
organizations, regions, offerings, etc.? There is still
ambiguity...
And this is where fully namespaced properties come in:
%{
{Company.Sales.Customer, :id} => 123,
{Company.Sales.Customer, :name} => "<NAME>",
{Company.Sales.Customer, :address} => "456 Lambda Ln"
}
Now, even without knowing the values or the associated grouped
properties, the meaning/context of `{Company.Sales.Customer, :id}`
is understood purely by its name.
In Elixir we'll be using `{module, atom}` tuples as that seems
more idiomatic (similar to mfa tuples).
See [Clojure's Keywords](https://clojuredocs.org/clojure.core/keyword),
for an environment that supports fully namespaced keywords/atoms
natively.
## Resolvers
Resolvers are functions bundled with a list of fully namespaced
input properties, and fully namespaced output properties. The
expectation of the function being that: given the named inputs,
it will return the outputs. Similar to the following:
%{
name: {Foo, :get_by_id},
input: [{Foo, :id}],
output: [
{Foo, :bar},
{Foo, :baz}
],
function: fn _env, %{id: id} ->
{bar, baz} = get_foo_by_id(id)
%{
{Foo, :bar} => bar,
{Foo, :baz} => baz
}
end
}
Resolvers connect properties/attributes. If properties are the
nodes, then resolvers are the edges of a directed graph,
drawing the edges from the input nodes to point at their output
nodes.
Resolvers and namespaced properties are all you need to
construct a graph-based query system similar to
[GraphQL](https://graphql.org/), but without the overly
restrictive nature of a type system.
"""
use Boundary, deps: [Digraph, EQL, Interceptor, Rails], exports: []
@type env :: map
@type attr :: EQL.AST.Prop.expr()
@type shape_descriptor(x) :: %{optional(x) => shape_descriptor(x)}
@type shape_descriptor :: shape_descriptor(attr)
@type response_form(x) :: %{optional(x) => [response_form(x)] | any}
@type response_form :: response_form(attr)
end
# source: lib/janus.ex
defmodule Animu.Media.Anime.Video do
@moduledoc """
Stores video metadata from ffprobe plus the location of the file.
Should be immutable after initial generation.
"""
use Animu.Ecto.Schema
alias __MODULE__
@derive Jason.Encoder
embedded_schema do
field :filename, :string
field :dir, :string, default: "videos"
field :extension, :string
field :path, :string
field :format, :string
field :format_name, :string
field :duration, :decimal
field :start_time, :decimal
field :size, :integer
field :bit_rate, :integer
field :probe_score, :integer
field :original, :string
field :thumbnail, {:map, :string}
embeds_one :video_track, VideoTrack, on_replace: :delete do
field :index, :integer
field :codec_name, :string
field :coded_width, :integer
field :coded_height, :integer
field :width, :integer
field :height, :integer
field :pix_fmt, :string
field :bit_rate, :integer
field :profile, :string
field :nb_frames, :integer
field :avg_frame_rate, :string
field :start_time, :decimal
field :duration, :decimal
field :bits_per_raw_sample, :integer
field :display_aspect_ratio, :string
end
embeds_one :audio_track, AudioTrack, on_replace: :delete do
field :index, :integer
field :codec_name, :string
field :language, :string
field :bit_rate, :integer
field :bits_per_sample, :integer
field :max_bit_rate, :integer
field :sample_rate, :integer
field :sample_fmt, :string
field :profile, :string
field :nb_frames, :integer
field :channel_layout, :string
field :channels, :integer
field :start_time, :decimal
field :duration, :decimal
end
embeds_one :subtitles, Subtitles, on_replace: :delete do
field :type, :string, default: "ass"
field :filename, :string
field :dir, :string
field :fonts, {:array, :string}
field :font_dir, :string
end
end
require Protocol
Protocol.derive(Jason.Encoder, Video.VideoTrack)
Protocol.derive(Jason.Encoder, Video.AudioTrack)
Protocol.derive(Jason.Encoder, Video.Subtitles)
def changeset(%Video{} = video, attrs) do
video
|> cast(attrs, all_fields(Video, except: [:video_track, :audio_track, :subtitles]))
|> validate_required([:filename])
|> cast_embed(:video_track, with: &video_track_changeset/2)
|> cast_embed(:audio_track, with: &audio_track_changeset/2)
|> cast_embed(:subtitles, with: &subtitles_changeset/2)
|> update_path
end
def change(%Video{} = video) do
changeset(video, %{})
end
def video_track_changeset(%Video.VideoTrack{} = video_codec, attrs) do
video_codec
|> cast(attrs, all_fields(Video.VideoTrack))
end
def audio_track_changeset(%Video.AudioTrack{} = audio_codec, attrs) do
audio_codec
|> cast(attrs, all_fields(Video.AudioTrack))
end
def subtitles_changeset(%Video.Subtitles{} = subtitles, attrs) do
subtitles
|> cast(attrs, all_fields(Video.Subtitles))
end
def update_path(ch) do
case ch.valid? do
true ->
dir = get_field(ch, :dir)
name = get_field(ch, :filename)
path = Path.join(dir, name)
put_change(ch, :path, path)
_ -> ch
end
end
defdelegate new(golem, video_path, anime_dir), to: Video.Invoke
end
# source: lib/animu/media/anime/video.ex
defmodule StarWars.GraphQL.DB do
@moduledoc """
DB is an "in-memory" database implemented with an Elixir `Agent` to support the [Relay Star Wars example](https://github.com/relayjs/relay-examples/blob/master/star-wars)
NOTICE: in the original example the format of the data is id => name where name is a string.
"""
@initial_state %{
ship: %{
"1" => %{id: "1", name: "X-Wing", type: :star_wars_ship},
"2" => %{id: "2", name: "Y-Wing", type: :star_wars_ship},
"3" => %{id: "3", name: "A-Wing", type: :star_wars_ship},
"4" => %{id: "4", name: "Millenium Falcon", type: :star_wars_ship},
"5" => %{id: "5", name: "Home One", type: :star_wars_ship},
"6" => %{id: "6", name: "TIE Fighter", type: :star_wars_ship},
"7" => %{id: "7", name: "TIE Interceptor", type: :star_wars_ship},
"8" => %{id: "8", name: "Executor", type: :star_wars_ship}
},
faction: %{
"1" => %{
id: "1",
name: "Alliance to Restore the Republic",
ships: ["1", "2", "3", "4", "5"]
},
"2" => %{
id: "2",
name: "Galactic Empire",
ships: ["6", "7", "8"]
}
}
}
@doc """
Initialize a DB process (Agent)
"""
def start_link do
Agent.start_link(fn -> @initial_state end, name: __MODULE__)
end
def stop do
Agent.stop(__MODULE__)
end
def get(type, id) do
case Agent.get(__MODULE__, &get_in(&1, [type, id])) do
nil ->
{:error, "No #{type} with ID #{id}"}
result ->
{:ok, result}
end
end
@doc """
Create a new Ship and assign it to Faction identified by faction_id
NOTICE: this function is not concurrency-safe because there is no lock on `next_ship_id`,
and it also doesn't take care of referential integrity.
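## Example
    # "B-Wing" is added to faction "1" ("Alliance to Restore the Republic"):
    {:ok, ship} = StarWars.GraphQL.DB.create_ship("B-Wing", "1")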
"""
def create_ship(ship_name, faction_id) do
next_ship_id = Agent.get(__MODULE__, fn data ->
Map.keys(data[:ship])
|> Enum.map(fn id -> String.to_integer(id) end)
|> Enum.sort
|> List.last
|> Kernel.+(1) # there's a deprecation msg when trying to pipe to +
|> Integer.to_string
end)
ship_data = %{id: next_ship_id, name: ship_name, type: :star_wars_ship}
case Agent.update(__MODULE__, &put_in(&1, [:ship, next_ship_id], ship_data)) do
nil ->
{:error, "Could not create ship"}
:ok ->
faction_ships = Agent.get(__MODULE__, &Map.get(&1, :faction))[faction_id][:ships]
faction_ships = faction_ships ++ [next_ship_id]
Agent.update(__MODULE__, &put_in(&1, [:faction, faction_id, :ships], faction_ships))
{:ok, ship_data}
end
end
def dump_db do
Agent.get(__MODULE__, fn state -> state end)
end
def get_factions(names) do
factions = Agent.get(__MODULE__, &Map.get(&1, :faction)) |> Map.values
Enum.map(names, fn name ->
factions
|> Enum.find(&(&1.name == name))
end)
end
def get_faction(id) do
Agent.get(__MODULE__, &get_in(&1, [:faction, id]))
end
end
# source: apps/star_wars/graphql/db.ex
defmodule Flawless.Spec do
@moduledoc """
A structure for defining the spec of a schema element.
The `for` attribute allows defining type-specific specs.
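For example, a spec matching a plain string value could look like the following sketch (validation rules omitted; the remaining fields keep their defaults):

    %Flawless.Spec{type: :string, for: %Flawless.Spec.Value{}}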
"""
defstruct checks: [],
late_checks: [],
type: :any,
cast_from: [],
nil: :default,
on_error: nil,
for: nil
@type t() :: %__MODULE__{
checks: list(Flawless.Rule.t()),
late_checks: list(Flawless.Rule.t()),
type: atom(),
cast_from: list(atom()) | atom(),
nil: :default | true | false,
on_error: binary() | nil,
for:
Flawless.Spec.Value.t()
| Flawless.Spec.Struct.t()
| Flawless.Spec.List.t()
| Flawless.Spec.Tuple.t()
| Flawless.Spec.Literal.t()
}
defmodule Value do
@moduledoc """
Represents a simple value or a map.
The `schema` field is used when the value is a map, and is `nil` otherwise.
"""
defstruct schema: nil
@type t() :: %__MODULE__{
schema: map() | nil
}
end
defmodule Struct do
@moduledoc """
Represents a struct.
"""
defstruct schema: nil,
module: nil
@type t() :: %__MODULE__{
schema: map() | nil,
module: atom()
}
end
defmodule List do
@moduledoc """
Represents a list of elements.
Each element must conform to the `item_type` definition.
"""
defstruct item_type: nil
@type t() :: %__MODULE__{
item_type: Flawless.spec_type()
}
end
defmodule Tuple do
@moduledoc """
Represents a tuple.
Matching values are expected to be a tuple with the same
size as elem_types, and matching the rule for each element.
"""
defstruct elem_types: nil
@type t() :: %__MODULE__{
elem_types: {Flawless.spec_type()}
}
end
defmodule Literal do
@moduledoc """
Represents a literal constant.
Matching values are expected to be strictly equal to the value.
"""
defstruct value: nil
@type t() :: %__MODULE__{
value: any()
}
end
end
# source: lib/flawless/spec.ex
defmodule Timex.DateFormat do
@moduledoc """
Date formatting and parsing.
This module provides an interface and core implementation for converting date
values into strings (formatting) or the other way around (parsing) according
to the specified template.
Multiple template formats are supported, each one provided by a separate
module. One can also implement custom formatters for use with this module.
"""
alias Timex.DateTime
alias Timex.Format.DateTime.Formatter
alias Timex.Format.DateTime.Formatters.Strftime
alias Timex.Parse.DateTime.Parser
alias Timex.Parse.DateTime.Tokenizers.Strftime, as: StrftimeTokenizer
@doc """
Converts date values to strings according to the given template (aka format string).
"""
@spec format(%DateTime{}, String.t) :: {:ok, String.t} | {:error, String.t}
defdelegate format(%DateTime{} = date, format_string), to: Formatter
@doc """
Same as `format/2`, but takes a custom formatter.
"""
@spec format(%DateTime{}, String.t, :default | :strftime | atom()) :: {:ok, String.t} | {:error, String.t}
def format(%DateTime{} = date, format_string, formatter) when is_binary(format_string) do
case formatter do
:default -> Formatter.format(date, format_string)
:strftime -> Formatter.format(date, format_string, Strftime)
_ -> Formatter.format(date, format_string, formatter)
end
end
@doc """
Raising version of `format/2`. Returns a string with formatted date or raises a `FormatError`.
"""
@spec format!(%DateTime{}, String.t) :: String.t | no_return
defdelegate format!(%DateTime{} = date, format_string), to: Formatter
@doc """
Raising version of `format/3`. Returns a string with formatted date or raises a `FormatError`.
"""
@spec format!(%DateTime{}, String.t, atom) :: String.t | no_return
def format!(%DateTime{} = date, format_string, :default),
do: Formatter.format!(date, format_string)
def format!(%DateTime{} = date, format_string, :strftime),
do: Formatter.format!(date, format_string, Strftime)
defdelegate format!(%DateTime{} = date, format_string, formatter), to: Formatter
@doc """
Parses the date encoded in `string` according to the template.
"""
@spec parse(String.t, String.t) :: {:ok, %DateTime{}} | {:error, term}
defdelegate parse(date_string, format_string), to: Parser
@doc """
Parses the date encoded in `string` according to the template by using the
provided formatter.
"""
@spec parse(String.t, String.t, atom) :: {:ok, %DateTime{}} | {:error, term}
def parse(date_string, format_string, :default), do: Parser.parse(date_string, format_string)
def parse(date_string, format_string, :strftime), do: Parser.parse(date_string, format_string, StrftimeTokenizer)
defdelegate parse(date_string, format_string, parser), to: Parser
@doc """
Raising version of `parse/2`. Returns a DateTime struct, or raises a `ParseError`.
"""
@spec parse!(String.t, String.t) :: %DateTime{} | no_return
defdelegate parse!(date_string, format_string), to: Parser
@doc """
Raising version of `parse/3`. Returns a DateTime struct, or raises a `ParseError`.
"""
@spec parse!(String.t, String.t, atom) :: %DateTime{} | no_return
def parse!(date_string, format_string, :default), do: Parser.parse!(date_string, format_string)
def parse!(date_string, format_string, :strftime), do: Parser.parse!(date_string, format_string, StrftimeTokenizer)
defdelegate parse!(date_string, format_string, parser), to: Parser
@doc """
Verifies the validity of the given format string according to the provided
formatter, defaults to the Default formatter if one is not provided.
Returns `:ok` if the format string is clean, `{ :error, <reason> }` otherwise.
"""
@spec validate(String.t) :: :ok | {:error, term}
@spec validate(String.t, atom) :: :ok | {:error, term}
def validate(format_string, formatter \\ nil) do
case formatter do
f when f in [:default, nil] ->
Formatter.validate(format_string)
:strftime -> Formatter.validate(format_string, StrftimeTokenizer)
_ -> Formatter.validate(format_string, formatter)
end
end
end
# source: lib/date/date_format.ex
defmodule OAuth2TokenManager.Store.Local do
@default_cleanup_interval 15
@moduledoc """
Simple token store using ETS and DETS
Access tokens are stored in an ETS table, since they can easily be renewed with a
refresh token. Refresh tokens and claims are stored in DETS.
This implementation is probably not suited for production, firstly because it's not
distributed.
Since the ETS table must be owned by a process and a cleanup process must be
implemented to delete expired tokens, this implementation must be started under a
supervision tree. It implements the `child_spec/1` and `start_link/1` functions (from
`GenServer`).
DETS reads and writes use the following tables:
- `"Elixir.OAuth2TokenManager.Store.Local.RefreshToken"` for refresh tokens
- `"Elixir.OAuth2TokenManager.Store.Local.Claims"` for claims and ID tokens
## Options
- `:cleanup_interval`: the interval between cleanups of the underlying ETS and DETS table in
seconds. Defaults to #{@default_cleanup_interval}
## Starting this implementation
In your `MyApp.Application` module, add:
children = [
OAuth2TokenManager.Store.Local
]
or
children = [
{OAuth2TokenManager.Store.Local, cleanup_interval: 30}
]
"""
@behaviour OAuth2TokenManager.Store
use GenServer
alias OAuth2TokenManager.Store
defmodule InsertError do
defexception [:reason]
@impl true
def message(%{reason: reason}), do: "insert failed with reason: #{inspect(reason)}"
end
defmodule MultipleResultsError do
defexception message: "illegal return of multiples entries"
end
def start_link(opts) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
@impl GenServer
def init(opts) do
:dets.open_file(rt_tab(), [])
:ets.new(at_tab(), [:public, :named_table, {:read_concurrency, true}])
:dets.open_file(claim_tab(), [])
schedule_cleanup(opts)
{:ok, opts}
end
@impl GenServer
def handle_info(:cleanup, state) do
cleanup_access_tokens()
cleanup_refresh_tokens()
schedule_cleanup(state)
{:noreply, state}
end
defp cleanup_access_tokens() do
match_spec = [
{
{:_, :_, :_, %{"exp" => :"$1"}, :_},
[{:<, :"$1", now()}],
[:"$1"]
}
]
:ets.select_delete(at_tab(), match_spec)
end
defp cleanup_refresh_tokens() do
match_spec = [
{
{:_, :_, %{"exp" => :"$1"}, :_},
[{:<, :"$1", now()}],
[:"$1"]
}
]
:dets.select_delete(rt_tab(), match_spec)
end
@impl Store
def get_access_token(at) do
case :ets.lookup(at_tab(), at) do
[{at, _issuer, token_type, at_metadata, updated_at}] ->
if OAuth2TokenManager.token_valid?(at_metadata) do
{:ok, {at, token_type, at_metadata, updated_at}}
else
delete_access_token(at)
{:ok, nil}
end
[] ->
{:ok, nil}
[_ | _] ->
{:error, %MultipleResultsError{}}
end
end
@impl Store
def get_access_tokens_for_subject(iss, sub) do
match_spec = [
{
{:"$1", :"$2", :_, %{"sub" => :"$3"}, :_},
[{:==, :"$2", iss}, {:==, :"$3", sub}],
[:"$1"]
}
]
result =
:ets.select(at_tab(), match_spec)
|> Enum.reduce(
[],
fn at, acc ->
case get_access_token(at) do
{:ok, {^at, token_type, at_metadata, updated_at}} ->
[{at, token_type, at_metadata, updated_at} | acc]
_ ->
acc
end
end
)
|> Enum.filter(&OAuth2TokenManager.token_valid?/1)
{:ok, result}
rescue
e ->
{:error, e}
end
@impl Store
def get_access_tokens_client_credentials(iss, client_id) do
match_spec = [
{
{:"$1", :"$2", :_, %{"client_id" => :"$3"}, :_},
[{:==, :"$2", iss}, {:==, :"$3", client_id}],
[:"$1"]
}
]
result =
:ets.select(at_tab(), match_spec)
|> Enum.reduce(
[],
fn at, acc ->
case get_access_token(at) do
{:ok, {^at, _token_type, %{"sub" => _}, _updated_at}} ->
acc
{:ok, {^at, token_type, at_metadata, updated_at}} ->
[{at, token_type, at_metadata, updated_at} | acc]
_ ->
acc
end
end
)
|> Enum.filter(&OAuth2TokenManager.token_valid?/1)
{:ok, result}
rescue
e ->
{:error, e}
end
@impl Store
def put_access_token(at, token_type, at_metadata, iss) do
:ets.insert(at_tab(), {at, iss, token_type, at_metadata, now()})
{:ok, at_metadata}
rescue
e ->
{:error, e}
end
@impl Store
def delete_access_token(at) do
:ets.delete(at_tab(), at)
:ok
rescue
e ->
{:error, e}
end
@impl Store
def get_refresh_token(rt) do
case :dets.lookup(rt_tab(), rt) do
[{rt, _issuer, rt_metadata, updated_at}] ->
{:ok, {rt, rt_metadata, updated_at}}
[] ->
{:ok, nil}
[_ | _] ->
{:error, %MultipleResultsError{}}
end
end
@impl Store
def get_refresh_tokens_for_subject(iss, sub) do
match_spec = [
{
{:"$1", :"$2", %{"sub" => :"$3"}, :_},
[{:==, :"$2", iss}, {:==, :"$3", sub}],
[:"$1"]
}
]
result =
:dets.select(rt_tab(), match_spec)
|> Enum.reduce(
[],
fn rt, acc ->
case get_refresh_token(rt) do
{:ok, {^rt, rt_metadata, updated_at}} ->
[{rt, rt_metadata, updated_at} | acc]
_ ->
acc
end
end
)
|> Enum.filter(&OAuth2TokenManager.token_valid?/1)
{:ok, result}
rescue
e ->
{:error, e}
end
@impl Store
def get_refresh_tokens_client_credentials(iss, client_id) do
match_spec = [
{
{:"$1", :"$2", %{"client_id" => :"$3"}, :_},
[{:==, :"$2", iss}, {:==, :"$3", client_id}],
[:"$1"]
}
]
result =
:dets.select(rt_tab(), match_spec)
|> Enum.reduce(
[],
fn rt, acc ->
case get_refresh_token(rt) do
{:ok, {^rt, %{"sub" => _}, _updated_at}} ->
acc
{:ok, {^rt, rt_metadata, updated_at}} ->
[{rt, rt_metadata, updated_at} | acc]
_ ->
acc
end
end
)
|> Enum.filter(&OAuth2TokenManager.token_valid?/1)
{:ok, result}
rescue
e ->
{:error, e}
end
@impl Store
def put_refresh_token(rt, rt_metadata, iss) do
:dets.insert(rt_tab(), {rt, iss, rt_metadata, now()})
{:ok, rt_metadata}
rescue
e ->
{:error, e}
end
@impl Store
def delete_refresh_token(rt) do
:dets.delete(rt_tab(), rt)
:ok
rescue
e ->
{:error, e}
end
@impl Store
def get_claims(iss, sub) do
case :dets.lookup(claim_tab(), {iss, sub}) do
[{{_iss, _sub}, _id_token, claims_or_nil, updated_at_or_nil}] ->
{:ok, {claims_or_nil, updated_at_or_nil}}
[] ->
{:ok, nil}
[_ | _] ->
{:error, %MultipleResultsError{}}
end
end
@impl Store
def put_claims(iss, sub, claims) do
entry =
case get_id_token(iss, sub) do
{:ok, <<_::binary>> = id_token} ->
{{iss, sub}, id_token, claims, now()}
{:ok, nil} ->
{{iss, sub}, nil, claims, now()}
end
case :dets.insert(claim_tab(), entry) do
:ok ->
:ok
{:error, reason} ->
{:error, %InsertError{reason: reason}}
end
end
@impl Store
def get_id_token(iss, sub) do
case :dets.lookup(claim_tab(), {iss, sub}) do
[{{_iss, _sub}, id_token_or_nil, _claims_or_nil, _updated_at_or_nil}] ->
{:ok, id_token_or_nil}
[] ->
{:ok, nil}
[_ | _] ->
{:error, %MultipleResultsError{}}
end
end
@impl Store
def put_id_token(iss, sub, id_token) do
entry =
case get_claims(iss, sub) do
{:ok, {claims_or_nil, updated_at_or_nil}} ->
{{iss, sub}, id_token, claims_or_nil, updated_at_or_nil}
{:ok, nil} ->
{{iss, sub}, id_token, nil, nil}
end
case :dets.insert(claim_tab(), entry) do
:ok ->
:ok
{:error, reason} ->
{:error, %InsertError{reason: reason}}
end
end
defp at_tab(), do: Module.concat(__MODULE__, AccessToken)
defp rt_tab(), do: Module.concat(__MODULE__, RefreshToken) |> :erlang.atom_to_list()
defp claim_tab(), do: Module.concat(__MODULE__, Claims) |> :erlang.atom_to_list()
defp now, do: System.system_time(:second)
defp schedule_cleanup(state) do
interval = (state[:cleanup_interval] || @default_cleanup_interval) * 1000
Process.send_after(self(), :cleanup, interval)
end
end
# source: lib/oauth2_token_manager/store/local.ex
defmodule Day16 do
def part1(input) do
{rules, _, nearby} = parse(input)
rules = flatten_rules(rules)
nearby
|> List.flatten()
|> Enum.filter(fn field ->
not valid_field?(field, rules)
end)
|> Enum.sum
end
def part2(input) do
{rules, yours, nearby} = parse(input)
nearby = discard_invalid_tickets(rules, nearby)
all_tickets = [yours | nearby]
# Figure out the zero-based index for each field.
{indices,_} = rules
|> Enum.map(fn {name, rules} ->
{name, find_index_candidates(rules, all_tickets)}
end)
|> Enum.sort_by(fn {_name, set} -> MapSet.size(set) end)
|> Enum.map_reduce(MapSet.new(), fn {name, candidates}, seen ->
[index] = MapSet.difference(candidates, seen)
|> MapSet.to_list()
{{name, index}, MapSet.put(seen, index)}
end)
# Retrieve the values for the departure fields and multiply them.
indices
|> Enum.filter(fn {name, _index} ->
String.starts_with?(name, "departure")
end)
|> Enum.map(fn {_, index} ->
Enum.at(yours, index)
end)
|> Enum.reduce(1, &*/2)
end
defp find_index_candidates(rules, tickets) do
candidates = MapSet.new(0..length(hd(tickets))-1)
Enum.reduce(tickets, candidates, fn ticket, acc ->
Enum.with_index(ticket)
|> Enum.reduce(acc, fn {field, index}, acc ->
case valid_field?(field, rules) do
true -> acc
false -> MapSet.delete(acc, index)
end
end)
end)
end
defp discard_invalid_tickets(rules, nearby) do
rules = flatten_rules(rules)
Enum.filter(nearby, fn ticket -> valid?(ticket, rules) end)
end
defp valid?(ticket, rules) do
Enum.all?(ticket, fn field ->
valid_field?(field, rules)
end)
end
defp valid_field?(field, rules) do
Enum.any?(rules, fn range ->
field in range
end)
end
defp flatten_rules(rules) do
Enum.flat_map(rules, &(elem(&1, 1)))
end
defp parse(input) do
{first, next} = Enum.split_while(input, fn line ->
line != "your ticket:"
end)
{yours, nearby} = Enum.split_while(next, fn line ->
line != "nearby tickets:"
end)
rules = TicketParser.parse_rules(first)
yours = parse_ticket(hd(tl(yours)))
nearby = Enum.map(tl(nearby), &parse_ticket/1)
{rules, yours, nearby}
end
defp parse_ticket(fields) do
String.split(fields, ",") |> Enum.map(&String.to_integer/1)
end
end
defmodule TicketParser do
import NimbleParsec
defp reduce_range([min, max]) do
min..max
end
defp reduce_rule([description, range1, range2]) do
{description, [range1, range2]}
end
range = integer(min: 1)
|> ignore(string("-"))
|> integer(min: 1)
|> reduce({:reduce_range, []})
rule_def = ascii_string([?a..?z,?\s], min: 1)
|> ignore(string(": "))
|> concat(range)
|> ignore(string(" or "))
|> concat(range)
|> eos()
|> reduce({:reduce_rule, []})
defparsecp :rule, rule_def
def parse_rules(input) do
Enum.map(input, fn(line) ->
{:ok, [res], _, _, _, _} = rule(line)
res
end)
end
end
# source: day16/lib/day16.ex
defmodule Similarity.Cosine do
@moduledoc """
A struct that can be used to accumulate ids & attributes and calculate similarity between them.
"""
alias Similarity.Cosine
defstruct attributes_counter: 0, attributes_map: %{}, map: %{}
@doc """
Returns a new `%Cosine{}` struct to be first used with `add/3` function
"""
def new, do: %Cosine{}
@doc """
Puts a new id with attributes into `%Cosine{}.map` and returns `%Cosine{}` struct.
## Example:
s = Similarity.Cosine.new
s = s |> Similarity.Cosine.add("barna", [{"n_of_bacons", 3}, {"hair_color_r", 124}, {"hair_color_g", 188}, {"hair_color_b", 11}])
"""
def add(struct = %Cosine{map: map}, id, attributes) do
struct = %Cosine{attributes_map: attributes_map} = add_attributes(struct, attributes)
transformed_attributes =
attributes |> Enum.map(fn {key, value} -> {Map.get(attributes_map, key), value} end)
new_map = map |> Map.put(id, transformed_attributes)
%Cosine{struct | map: new_map}
end
@doc """
Returns the `Similarity.cosine_srol/2` similarity between two ids (id_a, id_b) in `%Cosine{}`
## Example:
s = Similarity.Cosine.new
s = s |> Similarity.Cosine.add("barna", [{"n_of_bacons", 3}, {"hair_color_r", 124}, {"hair_color_g", 188}, {"hair_color_b", 11}])
s = s |> Similarity.Cosine.add("somebody", [{"n_of_bacons", 0}, {"hair_color_r", 222}, {"hair_color_g", 62}, {"hair_color_b", 11}])
s |> Similarity.Cosine.between("barna", "somebody")
"""
def between(%Cosine{map: map}, id_a, id_b) do
do_between(map, id_a, id_b)
end
defp do_between(map, id_a, id_b) do
attributes_a = map |> Map.get(id_a)
attributes_b = map |> Map.get(id_b)
keys_a = attributes_a |> Enum.map(fn {k, _v} -> k end) |> MapSet.new()
keys_b = attributes_b |> Enum.map(fn {k, _v} -> k end) |> MapSet.new()
common_attributes_keys = MapSet.intersection(keys_a, keys_b)
common_attributes_a =
common_attributes_keys
|> Enum.map(fn common_key ->
Enum.find(attributes_a, fn {k, _v} -> k == common_key end) |> elem(1)
end)
common_attributes_b =
common_attributes_keys
|> Enum.map(fn common_key ->
Enum.find(attributes_b, fn {k, _v} -> k == common_key end) |> elem(1)
end)
Similarity.cosine_srol(common_attributes_a, common_attributes_b)
end
@doc """
Returns a stream of all unique pairs of similarities in `%Cosine{}.map`
## Example:
s = Similarity.Cosine.new
s = s |> Similarity.Cosine.add("barna", [{"n_of_bacons", 3}, {"hair_color_r", 124}, {"hair_color_g", 188}, {"hair_color_b", 11}])
s = s |> Similarity.Cosine.add("somebody", [{"n_of_bacons", 0}, {"hair_color_r", 222}, {"hair_color_g", 62}, {"hair_color_b", 11}])
Similarity.Cosine.stream(s)
"""
def stream(%Cosine{map: map}) do
Stream.resource(
fn -> {_all_ids = Map.keys(map), map} end,
&stream_next/1,
fn _ -> nil end
)
end
@doc false
def stream_next({[_last | []], _map}) do
{:halt, nil}
end
@doc false
def stream_next({[h_id | tl_ids], map}) do
{
tl_ids |> Enum.map(fn id -> {h_id, id, do_between(map, h_id, id)} end),
{tl_ids, map}
}
end
@doc false
def add_attributes(
struct = %Cosine{attributes_counter: attributes_counter, attributes_map: attributes_map},
attributes
) do
{new_attributes_counter, new_attributes_map} =
do_add_attributes(attributes, attributes_counter, attributes_map)
%Cosine{
struct
| attributes_counter: new_attributes_counter,
attributes_map: new_attributes_map
}
end
@doc false
def do_add_attributes([], attributes_counter, attributes_map) do
{attributes_counter, attributes_map}
end
@doc false
def do_add_attributes([{key, _value} | tl], attributes_counter, attributes_map) do
if Map.has_key?(attributes_map, key) do
do_add_attributes(tl, attributes_counter, attributes_map)
else
new_attributes_map = Map.put(attributes_map, key, attributes_counter)
new_attributes_counter = attributes_counter + 1
do_add_attributes(tl, new_attributes_counter, new_attributes_map)
end
end
end
# source: lib/similarity/cosine.ex
defmodule Ello.V3.Schema.DiscoveryTypes do
use Absinthe.Schema.Notation
alias Ello.V3.Resolvers
object :category do
field :id, :id
field :name, :string
field :slug, :string
field :level, :string
field :order, :integer
field :description, :string
field :tile_image, :tshirt_image_versions, resolve: fn(_args, %{source: category}) ->
{:ok, category.tile_image_struct}
end
field :allow_in_onboarding, :boolean
field :is_creator_type, :boolean
field :created_at, :datetime
field :category_users, list_of(:category_user) do
arg :roles, list_of(:category_user_role)
resolve &Resolvers.CategoryUsers.call/3
end
field :current_user_state, :category_user
field :brand_account, :user
end
object :category_search_result do
field :categories, list_of(:category)
field :is_last_page, :boolean, resolve: fn(_, _) -> {:ok, true} end
end
object :category_post do
field :id, :id
field :status, :string
field :submitted_at, :datetime
field :submitted_by, :user
field :featured_at, :datetime
field :featured_by, :user
field :unfeatured_at, :datetime
field :removed_at, :datetime
field :category, :category
field :post, :post
field :actions, :category_post_actions, resolve: &actions/2
end
object :category_post_actions do
field :feature, :category_post_action
field :unfeature, :category_post_action
end
object :category_post_action do
field :href, :string
field :label, :string
field :method, :string
end
object :category_user do
field :id, :id
field :role, :category_user_role
field :created_at, :datetime
field :updated_at, :datetime
field :category, :category
field :user, :user
end
enum :category_user_role do
value :moderator, as: "moderator"
value :curator, as: "curator"
value :featured, as: "featured"
end
object :page_header do
field :id, :id
field :user, :user
field :post_token, :string
field :slug, :string, resolve: &page_header_slug/2
field :kind, :page_header_kind, resolve: &page_header_kind/2
field :header, :string, resolve: &page_header_header/2
field :subheader, :string, resolve: &page_header_sub_header/2
field :cta_link, :page_header_cta_link, resolve: &page_header_cta_link/2
field :image, :responsive_image_versions, resolve: &page_header_image/2
field :category, :category
end
enum :page_header_kind do
value :category
value :artist_invite
value :editorial
value :authentication
value :generic
end
object :page_header_cta_link do
field :text, :string
field :url, :string
end
object :editorial_stream do
field :next, :string
field :per_page, :integer
field :is_last_page, :boolean
field :editorials, list_of(:editorial)
end
object :editorial do
field :id, :id
field :kind, :editorial_kind, resolve: &editorial_kind/2
field :title, :string, resolve: &editorial_content/2
field :subtitle, :string, resolve: &editorial_content(&1, &2, "rendered_subtitle")
field :path, :string, resolve: &editorial_content(&1, &2)
field :url, :string, resolve: &editorial_content(&1, &2)
field :post, :post
field :stream, :editorial_post_stream, resolve: &editorial_stream/2
field :one_by_one_image, :responsive_image_versions, resolve: &editorial_image/2
field :one_by_two_image, :responsive_image_versions, resolve: &editorial_image/2
field :two_by_one_image, :responsive_image_versions, resolve: &editorial_image/2
field :two_by_two_image, :responsive_image_versions, resolve: &editorial_image/2
end
enum :editorial_kind do
value :post
value :post_stream
value :internal
value :external
value :sponsored
end
object :editorial_post_stream do
field :query, :string
field :tokens, list_of(:string)
end
defp page_header_kind(_, %{source: %{category_id: _}}), do: {:ok, :category}
defp page_header_kind(_, %{source: %{is_editorial: true}}), do: {:ok, :editorial}
defp page_header_kind(_, %{source: %{is_artist_invite: true}}), do: {:ok, :artist_invite}
defp page_header_kind(_, %{source: %{is_authentication: true}}), do: {:ok, :authentication}
defp page_header_kind(_, %{source: _}), do: {:ok, :generic}
defp page_header_slug(_, %{source: %{category: %{slug: slug}}}), do: {:ok, slug}
defp page_header_slug(_, %{source: _}), do: {:ok, nil}
defp page_header_header(_, %{source: %{category: %{header: nil, name: copy}}}), do: {:ok, copy}
defp page_header_header(_, %{source: %{category: %{header: copy}}}), do: {:ok, copy}
defp page_header_header(_, %{source: %{header: copy}}), do: {:ok, copy}
defp page_header_sub_header(_, %{source: %{category: %{description: copy}}}), do: {:ok, copy}
defp page_header_sub_header(_, %{source: %{subheader: copy}}), do: {:ok, copy}
defp page_header_cta_link(_, %{source: %{category: %{cta_caption: text, cta_href: url}}}),
do: {:ok, %{text: text, url: url}}
defp page_header_cta_link(_, %{source: %{cta_caption: text, cta_href: url}}),
do: {:ok, %{text: text, url: url}}
defp page_header_image(_, %{source: %{image_struct: image}}), do: {:ok, image}
defp actions(args, %{context: %{current_user: nil}} = resolution) do
actions(args, resolution, nil)
end
defp actions(args, %{source: category_post, context: %{current_user: current_user}} = resolution) do
cat_user = Enum.find(current_user.category_users, &(&1.category_id == category_post.category.id))
actions(args, resolution, cat_user)
end
defp actions(_, %{
source: category_post,
context: %{current_user: %{is_staff: true}},
}, _) do
{:ok, %{
feature: feature_action(category_post),
unfeature: unfeature_action(category_post),
}}
end
defp actions(_, %{source: category_post}, %{role: "curator"}) do
{:ok, %{
feature: feature_action(category_post),
unfeature: unfeature_action(category_post),
}}
end
defp actions(_, _, _), do: {:ok, nil}
defp feature_action(%{id: id, status: "submitted"}), do: %{
href: "/api/v2/category_posts/#{id}/feature",
method: "put",
}
defp feature_action(_), do: nil
defp unfeature_action(%{id: id, status: "featured"}), do: %{
href: "/api/v2/category_posts/#{id}/unfeature",
method: "put",
}
defp unfeature_action(_), do: nil
@editorial_kinds %{
"post" => :post,
"curated_posts" => :post_stream,
"internal" => :internal,
"external" => :external,
"sponsored" => :sponsored,
}
defp editorial_kind(_, %{source: %{kind: kind}}), do: {:ok, @editorial_kinds[kind]}
defp editorial_content(a, %{definition: %{schema_node: %{identifier: name}}} = b),
do: editorial_content(a, b, "#{name}")
defp editorial_content(_, %{source: editorial}, key),
do: {:ok, Map.get(editorial.content, key)}
defp editorial_stream(_, %{source: %{kind: "curated_posts"} = editorial}) do
{:ok, %{
query: "findPosts",
tokens: editorial.content["post_tokens"],
}}
end
defp editorial_stream(_, _), do: {:ok, nil}
defp editorial_image(_, %{
definition: %{schema_node: %{identifier: :one_by_one_image}},
source: editorial,
}), do: one_by_one_image(editorial)
defp editorial_image(_, %{
definition: %{schema_node: %{identifier: :one_by_two_image}},
source: %{one_by_two_image_struct: nil} = editorial,
}), do: one_by_one_image(editorial)
defp editorial_image(_, %{
definition: %{schema_node: %{identifier: :one_by_two_image}},
source: %{one_by_two_image_struct: image},
}), do: {:ok, image}
defp editorial_image(_, %{
definition: %{schema_node: %{identifier: :two_by_one_image}},
source: %{two_by_one_image_struct: nil} = editorial,
}), do: one_by_one_image(editorial)
defp editorial_image(_, %{
definition: %{schema_node: %{identifier: :two_by_one_image}},
source: %{two_by_one_image_struct: image},
}), do: {:ok, image}
defp editorial_image(_, %{
definition: %{schema_node: %{identifier: :two_by_two_image}},
source: %{two_by_two_image_struct: nil} = editorial,
}), do: one_by_one_image(editorial)
defp editorial_image(_, %{
definition: %{schema_node: %{identifier: :two_by_two_image}},
source: %{two_by_two_image_struct: image},
}), do: {:ok, image}
defp one_by_one_image(%{one_by_one_image_struct: nil}), do: {:ok, nil}
defp one_by_one_image(%{one_by_one_image_struct: image}), do: {:ok, image}
end
# source: apps/ello_v3/lib/ello_v3/schema/discovery_types.ex
defmodule Central.Helpers.StructureHelper do
@moduledoc """
A module to make import/export of JSON objects easier. Currently only tested with a single parent object and multiple sets of child objects.
Designed to not take the IDs with it as they are liable to change based on the database they go into.
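## Example
    # A sketch with a hypothetical schema module:
    data = Central.Helpers.StructureHelper.export(MyApp.Account.User, 1)
    Central.Helpers.StructureHelper.import(MyApp.Account.User, data)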
"""
alias Central.Repo
import Ecto.Query, warn: false
@skip_export_fields [:__meta__, :inserted_at, :updated_at]
@skip_import_fields ~w(id)
defp query_obj(module, id) do
query =
from objects in module,
where: objects.id == ^id
Repo.one!(query)
end
defp cast_many(object, field, parent_module) do
association = parent_module.__schema__(:association, field)
object_module = association.queryable
case association.relationship do
:parent ->
:skip
:child ->
Repo.preload(object, field)
|> Map.get(field)
|> Enum.map(fn item -> cast_one(item, object_module) end)
end
end
defp cast_one(object, module) do
skip_fields =
if Kernel.function_exported?(module, :structure_export_skips, 0) do
module.structure_export_skips()
else
[]
end
object
|> Map.from_struct()
|> Enum.filter(fn {k, _} ->
not Enum.member?(@skip_export_fields, k) and not Enum.member?(skip_fields, k)
end)
|> Enum.map(fn {k, v} ->
cond do
module.__schema__(:field_source, k) -> {k, v}
module.__schema__(:association, k) -> {k, cast_many(object, k, module)}
end
end)
|> Enum.filter(fn {_, v} -> v != :skip end)
|> Map.new()
end
def export(module, id) do
query_obj(module, id)
|> cast_one(module)
end
defp import_assoc(parent_module, field, data, parent_id) when is_list(data) do
field = String.to_existing_atom(field)
assoc = parent_module.__schema__(:association, field)
data
|> Enum.map(fn item_params ->
import_assoc(assoc, item_params, parent_id)
end)
end
defp import_assoc(assoc, params, parent_id) when is_map(params) do
key = assoc.related_key |> to_string
params =
Map.put(params, key, parent_id)
|> Enum.filter(fn {k, _} -> not Enum.member?(@skip_import_fields, k) end)
|> Map.new()
module = assoc.queryable
{:ok, _new_object} =
module.changeset(module.__struct__, params)
|> Repo.insert()
end
# Given the root module and the data, this should create everything you need
def import(module, data) do
assocs =
module.__schema__(:associations)
|> Enum.map(&to_string/1)
# First, create and insert the core object
core_params =
data
|> Enum.filter(fn {k, _} ->
not Enum.member?(assocs, k) and not Enum.member?(@skip_import_fields, k)
end)
|> Map.new()
{:ok, core_object} =
module.changeset(module.__struct__, core_params)
|> Repo.insert()
# Now, lets add the assocs
data
|> Enum.filter(fn {k, _} -> Enum.member?(assocs, k) end)
|> Enum.each(fn {k, v} -> import_assoc(module, k, v, core_object.id) end)
core_object
end
end
# source: lib/central/helpers/structure_helper.ex
defmodule Relay.Marathon.App do
@moduledoc """
Turns Marathon API JSON into consistent App objects.
"""
alias Relay.Marathon.{Labels, Networking}
@enforce_keys [:id, :networking_mode, :ports_list, :port_indices, :labels, :version]
defstruct [:id, :networking_mode, :ports_list, :port_indices, :labels, :version]
@type t :: %__MODULE__{
id: String.t(),
networking_mode: Networking.networking_mode(),
ports_list: [:inet.port_number()],
port_indices: [non_neg_integer],
labels: Labels.labels(),
version: String.t()
}
@spec from_definition(map, String.t()) :: t
def from_definition(%{"id" => id, "labels" => labels} = app, group) do
ports_list = Networking.ports_list(app)
%__MODULE__{
id: id,
networking_mode: Networking.networking_mode(app),
ports_list: ports_list,
port_indices: port_indices(ports_list, labels, group),
labels: labels,
version: version(app)
}
end
@spec from_event(map, String.t()) :: t
def from_event(
%{"eventType" => "api_post_event", "appDefinition" => definition} = _event,
group
),
do: from_definition(definition, group)
@spec port_indices([:inet.port_number()], Labels.labels(), String.t()) :: [non_neg_integer]
defp port_indices([], _labels, _group), do: []
defp port_indices(ports_list, labels, group) do
0..(length(ports_list) - 1)
|> Enum.filter(fn port_index -> Labels.marathon_lb_group(labels, port_index) == group end)
end
@spec version(map) :: String.t()
defp version(app) do
case app do
# In most cases the `lastConfigChangeAt` value should be available...
%{"versionInfo" => %{"lastConfigChangeAt" => version}} ->
version
# ...but if this is an app that hasn't been changed yet then use `version`
%{"version" => version} ->
version
end
end
@spec marathon_lb_vhost(t, non_neg_integer) :: [String.t()]
def marathon_lb_vhost(%__MODULE__{labels: labels}, port_index),
do: Labels.marathon_lb_vhost(labels, port_index)
@spec marathon_lb_redirect_to_https?(t, non_neg_integer) :: boolean
def marathon_lb_redirect_to_https?(%__MODULE__{labels: labels}, port_index),
do: Labels.marathon_lb_redirect_to_https?(labels, port_index)
@spec marathon_acme_domain(t, non_neg_integer) :: [String.t()]
def marathon_acme_domain(%__MODULE__{labels: labels}, port_index),
do: Labels.marathon_acme_domain(labels, port_index)
end
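# Illustrative sketch: `definition` stands for a decoded Marathon `/v2/apps`
# app definition map, and the "external" group is an assumption for the
# example, not a value required by this module:
#
#     app = Relay.Marathon.App.from_definition(definition, "external")
#     Relay.Marathon.App.marathon_lb_vhost(app, 0)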
# Source: lib/relay/marathon/app.ex
defmodule Membrane.Element.Action do
@moduledoc """
This module contains type specifications of actions that can be returned
from element callbacks.
Returning actions is a way for an element to interact with
other elements and parts of the framework. Each action may be returned by any
callback (except for `c:Membrane.Element.Base.Mixin.CommonBehaviour.handle_init`
and `c:Membrane.Element.Base.Mixin.CommonBehaviour.handle_terminate`, as they
do not return any actions) unless explicitly stated otherwise.
"""
alias Membrane.{Buffer, Caps, Event, Message}
alias Membrane.Element.Pad
@typedoc """
Sends a message to the pipeline.
"""
@type message_t :: {:message, Message.t()}
@typedoc """
Sends an event through a pad (sink or source).
Forbidden when playback state is stopped.
"""
@type event_t :: {:event, {Pad.name_t(), Event.t()}}
@typedoc """
Allows splitting callback execution into multiple applications of another callback
(referred to below as the sub-callback).
Executions are synchronous in the element process, and each of them passes
subsequent arguments from the args_list, along with the element state (passed
as the last argument each time).
Return value of each execution of sub-callback can be any valid return value
of the original callback (this also means sub-callback can return any action
valid for the original callback, unless explicitly stated). Returned actions
are executed immediately (they are NOT accumulated and executed after all
sub-callback executions are finished).
Useful when a long action is to be undertaken, and partial results need to
be returned before the entire process finishes (e.g. the default implementation of
`c:Membrane.Element.Base.Filter.handle_process/4` uses split action to invoke
`c:Membrane.Element.Base.Filter.handle_process1/4` with each buffer)
"""
@type split_t :: {:split, {callback_name :: atom, args_list :: [[any]]}}
@typedoc """
Sends caps through a pad (it must be a source pad). Sent caps must fit
the constraints on the pad.
Forbidden when playback state is stopped.
"""
@type caps_t :: {:caps, {Pad.name_t(), Caps.t()}}
@typedoc """
Sends buffers through a pad (it must be a source pad).
Allowed only when playback state is playing.
"""
@type buffer_t :: {:buffer, {Pad.name_t(), Buffer.t() | [Buffer.t()]}}
@typedoc """
Makes a demand on a pad (it must be a sink pad in pull mode). It does NOT
entail _sending_ demand through the pad, but just _requesting_ some amount
of data from `Membrane.Core.PullBuffer`, which _sends_ demands automatically when it
runs out of data.
If there is any data available at the pad, the data is passed to
`c:Membrane.Element.Base.Filter.handle_process/4`
or `c:Membrane.Element.Base.Sink.handle_write/4` callback. Invoked callback is
guaranteed not to receive more data than demanded.
Depending on element type and callback, it may contain different payloads or
behave differently:
In sinks:
- Payload `{pad, size}` increases demand on given pad by given size.
- Payload `{pad, {:set_to, size}}` erases current demand and sets it to given size.
In filters:
- Payload `{pad, size}` is only allowed from
`c:Membrane.Element.Base.Mixin.SourceBehaviour.handle_demand/5` callback. It overrides
current demand.
- Payload `{pad, {:source, demanding_source_pad}, size}` can be returned from
any callback. `demanding_source_pad` is a pad which is to receive demanded
buffers after they are processed.
- Payload `{pad, :self, size}` makes demand act as if element was a sink,
that is extends demand on a given pad. Buffers received as a result of the
demand should be consumed by element itself or sent through a pad in `push` mode.
Allowed only when playback state is playing.
"""
@type demand_t ::
{:demand, demand_common_payload_t | demand_filter_payload_t | demand_sink_payload_t}
@type demand_filter_payload_t ::
{Pad.name_t(), {:source, Pad.name_t()} | :self, size :: non_neg_integer}
@type demand_sink_payload_t :: {Pad.name_t(), {:set_to, size :: non_neg_integer}}
@type demand_common_payload_t :: Pad.name_t() | {Pad.name_t(), size :: non_neg_integer}
@typedoc """
Executes `c:Membrane.Element.Base.Mixin.SourceBehaviour.handle_demand/5` callback with
given pad (which must be a source pad in pull mode) if this demand is greater
than 0.
Useful when demand could not have been supplied when previous call to
`c:Membrane.Element.Base.Mixin.SourceBehaviour.handle_demand/5` happened, but some
element-specific circumstances changed and it might be possible to supply
it (at least partially).
Allowed only when playback state is playing.
"""
@type redemand_t :: {:redemand, Pad.name_t()}
@typedoc """
Sends buffers/caps/events to all source pads of the element (or to sink pads when
event occurs on the source pad). Used by default implementations of
`c:Membrane.Element.Base.Mixin.SinkBehaviour.handle_caps/4` and
`c:Membrane.Element.Base.Mixin.CommonBehaviour.handle_event/4` callbacks in filter.
Allowed only when _all_ below conditions are met:
- element is filter,
- callback is `c:Membrane.Element.Base.Filter.handle_process/4`,
`c:Membrane.Element.Base.Mixin.SinkBehaviour.handle_caps/4`
or `c:Membrane.Element.Base.Mixin.CommonBehaviour.handle_event/4`,
- playback state is valid for sending buffer, caps or event action
respectively.
Keep in mind that `c:Membrane.Element.Base.Filter.handle_process/4` can only
forward buffers, `c:Membrane.Element.Base.Mixin.SinkBehaviour.handle_caps/4` - caps
and `c:Membrane.Element.Base.Mixin.CommonBehaviour.handle_event/4` - events.
"""
@type forward_t :: {:forward, Buffer.t() | [Buffer.t()] | Caps.t() | Event.t()}
@typedoc """
Suspends/resumes change of playback state.
- `playback_change: :suspend` may be returned only from
`c:Membrane.Element.Base.Mixin.CommonBehaviour.handle_prepare/3`,
`c:Membrane.Element.Base.Mixin.CommonBehaviour.handle_play/2` and
`c:Membrane.Element.Base.Mixin.CommonBehaviour.handle_stop/2` callbacks,
and defers playback state change until `playback_change: :resume` is returned.
- `playback_change: :resume` may be returned from any callback, only when
playback state change is suspended, and causes it to finish.
There is no strict limit on how long a playback change can take, but keep in mind
that it may affect application quality if not done quickly enough.
"""
@type playback_change_t :: {:playback_change, :suspend | :resume}
@typedoc """
Type that defines a single action that may be returned from element callbacks.
Depending on element type, callback, current playback state and other
circumstances there may be different actions available.
"""
@type t ::
event_t
| message_t
| split_t
| caps_t
| buffer_t
| demand_t
| redemand_t
| forward_t
| playback_change_t
end
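# A hedged sketch of returning these actions from a filter callback (the pad
# names are illustrative; the callback shape follows the Filter behaviour
# referenced in the typedocs above):
#
#     def handle_process(:sink, buffer, _ctx, state) do
#       {{:ok, [buffer: {:source, buffer}, redemand: :source]}, state}
#     end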
# Source: lib/membrane/element/action.ex
defmodule AWS.CodeBuild do
@moduledoc """
CodeBuild is a fully managed build service in the cloud.
CodeBuild compiles your source code, runs unit tests, and produces artifacts
that are ready to deploy. CodeBuild eliminates the need to provision, manage,
and scale your own build servers. It provides prepackaged build environments for
the most popular programming languages and build tools, such as Apache Maven,
Gradle, and more. You can also fully customize build environments in CodeBuild
to use your own build tools. CodeBuild scales automatically to meet peak build
requests. You pay only for the build time you consume. For more information
about CodeBuild, see the *[CodeBuild User Guide](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html)*.
"""
alias AWS.Client
alias AWS.Request
def metadata do
%AWS.ServiceMetadata{
abbreviation: nil,
api_version: "2016-10-06",
content_type: "application/x-amz-json-1.1",
credential_scope: nil,
endpoint_prefix: "codebuild",
global?: false,
protocol: "json",
service_id: "CodeBuild",
signature_version: "v4",
signing_name: "codebuild",
target_prefix: "CodeBuild_20161006"
}
end
@doc """
Deletes one or more builds.
"""
def batch_delete_builds(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchDeleteBuilds", input, options)
end
@doc """
Retrieves information about one or more batch builds.
"""
def batch_get_build_batches(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchGetBuildBatches", input, options)
end
@doc """
Gets information about one or more builds.
"""
def batch_get_builds(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchGetBuilds", input, options)
end
@doc """
Gets information about one or more build projects.
"""
def batch_get_projects(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchGetProjects", input, options)
end
@doc """
Returns an array of report groups.
"""
def batch_get_report_groups(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchGetReportGroups", input, options)
end
@doc """
Returns an array of reports.
"""
def batch_get_reports(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchGetReports", input, options)
end
@doc """
Creates a build project.
"""
def create_project(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateProject", input, options)
end
@doc """
Creates a report group.
A report group contains a collection of reports.
"""
def create_report_group(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateReportGroup", input, options)
end
@doc """
For an existing CodeBuild build project that has its source code stored in a
GitHub or Bitbucket repository, enables CodeBuild to start rebuilding the source
code every time a code change is pushed to the repository.
If you enable webhooks for a CodeBuild project, and the project is used as a
build step in CodePipeline, then two identical builds are created for each
commit. One build is triggered through webhooks, and one through CodePipeline.
Because billing is on a per-build basis, you are billed for both builds.
Therefore, if you are using CodePipeline, we recommend that you disable webhooks
in CodeBuild. In the CodeBuild console, clear the Webhook box. For more
information, see step 5 in [Change a Build Project's Settings](https://docs.aws.amazon.com/codebuild/latest/userguide/change-project.html#change-project-console).
"""
def create_webhook(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateWebhook", input, options)
end
@doc """
Deletes a batch build.
"""
def delete_build_batch(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteBuildBatch", input, options)
end
@doc """
Deletes a build project.
When you delete a project, its builds are not deleted.
"""
def delete_project(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteProject", input, options)
end
@doc """
Deletes a report.
"""
def delete_report(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteReport", input, options)
end
@doc """
Deletes a report group.
Before you delete a report group, you must delete its reports.
"""
def delete_report_group(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteReportGroup", input, options)
end
@doc """
Deletes a resource policy that is identified by its resource ARN.
"""
def delete_resource_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteResourcePolicy", input, options)
end
@doc """
Deletes a set of GitHub, GitHub Enterprise, or Bitbucket source credentials.
"""
def delete_source_credentials(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteSourceCredentials", input, options)
end
@doc """
For an existing CodeBuild build project that has its source code stored in a
GitHub or Bitbucket repository, stops CodeBuild from rebuilding the source code
every time a code change is pushed to the repository.
"""
def delete_webhook(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteWebhook", input, options)
end
@doc """
Retrieves one or more code coverage reports.
"""
def describe_code_coverages(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeCodeCoverages", input, options)
end
@doc """
Returns a list of details about test cases for a report.
"""
def describe_test_cases(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeTestCases", input, options)
end
@doc """
Analyzes and accumulates test report values for the specified test reports.
"""
def get_report_group_trend(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetReportGroupTrend", input, options)
end
@doc """
Gets a resource policy that is identified by its resource ARN.
"""
def get_resource_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetResourcePolicy", input, options)
end
@doc """
Imports the source repository credentials for a CodeBuild project that has its
source code stored in a GitHub, GitHub Enterprise, or Bitbucket repository.
"""
def import_source_credentials(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ImportSourceCredentials", input, options)
end
@doc """
Resets the cache for a project.
"""
def invalidate_project_cache(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "InvalidateProjectCache", input, options)
end
@doc """
Retrieves the identifiers of your build batches in the current region.
"""
def list_build_batches(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListBuildBatches", input, options)
end
@doc """
Retrieves the identifiers of the build batches for a specific project.
"""
def list_build_batches_for_project(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListBuildBatchesForProject", input, options)
end
@doc """
Gets a list of build IDs, with each build ID representing a single build.
"""
def list_builds(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListBuilds", input, options)
end
@doc """
Gets a list of build identifiers for the specified build project, with each
build identifier representing a single build.
"""
def list_builds_for_project(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListBuildsForProject", input, options)
end
@doc """
Gets information about Docker images that are managed by CodeBuild.
"""
def list_curated_environment_images(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListCuratedEnvironmentImages", input, options)
end
@doc """
Gets a list of build project names, with each build project name representing a
single build project.
"""
def list_projects(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListProjects", input, options)
end
@doc """
Gets a list of ARNs for the report groups in the current Amazon Web Services
account.
"""
def list_report_groups(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListReportGroups", input, options)
end
@doc """
Returns a list of ARNs for the reports in the current Amazon Web Services
account.
"""
def list_reports(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListReports", input, options)
end
@doc """
Returns a list of ARNs for the reports that belong to a `ReportGroup`.
"""
def list_reports_for_report_group(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListReportsForReportGroup", input, options)
end
@doc """
Gets a list of projects that are shared with other Amazon Web Services accounts
or users.
"""
def list_shared_projects(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListSharedProjects", input, options)
end
@doc """
Gets a list of report groups that are shared with other Amazon Web Services
accounts or users.
"""
def list_shared_report_groups(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListSharedReportGroups", input, options)
end
@doc """
Returns a list of `SourceCredentialsInfo` objects.
"""
def list_source_credentials(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListSourceCredentials", input, options)
end
@doc """
Stores a resource policy for the ARN of a `Project` or `ReportGroup` object.
"""
def put_resource_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutResourcePolicy", input, options)
end
@doc """
Restarts a build.
"""
def retry_build(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "RetryBuild", input, options)
end
@doc """
Restarts a failed batch build.
Only batch builds that have failed can be retried.
"""
def retry_build_batch(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "RetryBuildBatch", input, options)
end
@doc """
Starts running a build.
"""
def start_build(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "StartBuild", input, options)
end
@doc """
Starts a batch build for a project.
"""
def start_build_batch(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "StartBuildBatch", input, options)
end
@doc """
Attempts to stop running a build.
"""
def stop_build(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "StopBuild", input, options)
end
@doc """
Stops a running batch build.
"""
def stop_build_batch(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "StopBuildBatch", input, options)
end
@doc """
Changes the settings of a build project.
"""
def update_project(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateProject", input, options)
end
@doc """
Updates a report group.
"""
def update_report_group(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateReportGroup", input, options)
end
@doc """
Updates the webhook associated with a CodeBuild build project.
If you use Bitbucket for your repository, `rotateSecret` is ignored.
"""
def update_webhook(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateWebhook", input, options)
end
end
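# Minimal usage sketch (the credentials, region, and project name are
# placeholders; `AWS.Client.create/3` is the client constructor from the
# aws-elixir library):
#
#     client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")
#     {:ok, result, _http_response} =
#       AWS.CodeBuild.start_build(client, %{"projectName" => "my-project"})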
# Source: lib/aws/generated/code_build.ex
defmodule ParseClient do
@moduledoc """
REST API client for Parse in Elixir
## Example usage
To get information about an object (and print out the whole response):
ParseClient.get("classes/Lumberjacks")
To just see the body, use the `query` function:
ParseClient.query("classes/Lumberjacks")
To create a new object, use the `post` function, to update an object, use
the `put` function, and to delete an object, use the `delete` function.
The `get` and `query` methods can also be used with users and roles.
## Use of filters when making queries
Queries can be filtered by using *filters* and *options*.
Use filters when you would use `where=` clauses in a request to the Parse API.
Options include "order", "limit", "count" and "include".
Both filters and options need to be Elixir maps.
## Comparisons in filters
The following filters (keys) from Parse.com are supported:
Key | Operation
--- | ---
$lt | Less than
$lte | Less than or equal to
$gt | Greater than
$gte | Greater than or equal to
$ne | Not equal to
$in | Contained in
$nin | Not contained in
$exists | A value is set for the key
$select | This matches a value for a key in the result of a different query
$dontSelect | Requires that a key's value not match a value for a key in the result of a different query
$all | Contains all of the given values
### Examples
To make a query just about animals that are less than 3 years old:
ParseClient.query("classes/Animals", %{"age" => %{"$lt" => 3}})
To make a query just about animals who have a name and are still alive:
ParseClient.query("classes/Animals", %{"name" => %{"$exists" => true}, "status" => 1})
"""
alias ParseClient.Requests, as: Req
alias ParseClient.Authenticate, as: Auth
@doc """
Get request for making queries.
"""
def get(url), do: Req.request!(:get, url, "", get_headers())
@doc """
Get request with filters.
## Examples
To make a query just about animals that have a name and are still alive:
ParseClient.get("classes/Animals", %{"name" => %{"$exists" => true}, "status" => 1})
To make a request with options, but no filters, use %{} as the second argument:
ParseClient.get("classes/Animals", %{}, %{"order" => "createdAt"})
"""
def get(url, filters, options \\ %{}, httpoison_options \\ []) do
filter_string = Req.parse_filters(filters, options)
Req.request!(:get, url <> "?" <> filter_string, "", get_headers(), httpoison_options)
end
@doc """
Get request for making queries. Just returns the body of the response.
"""
def query(url), do: get(url).body
@doc """
Get request for making queries with filters and options.
Just returns the body of the response.
"""
def query(url, filters, options \\ %{}) do
get(url, filters, options).body
end
@doc """
Request to create an object.
## Example
body = %{"animal" => "parrot, "name" => "NorwegianBlue", "status" => 0}
ParseClient.post("classes/Animals", body)
"""
def post(url, body, options \\ []), do: Req.request!(:post, url, body, post_headers(), options)
@doc """
Request to update an object.
## Example
ParseClient.put("classes/Animals/12345678", %{"status" => 1})
"""
def put(url, body), do: Req.request!(:put, url, body, post_headers())
@doc """
Request to delete an object.
## Example
ParseClient.delete("classes/Animals/12345678")
"""
def delete(url), do: Req.request!(:delete, url, "", get_headers())
@doc """
Request from a user to signup. The user must provide a username
and a password. The options argument refers to additional information,
such as email address or phone number, and it needs to be in the form
of an Elixir map.
## Examples
ParseClient.signup("Duchamp", "L_H;OO#Q")
ParseClient.signup("Duchamp", "L_H;OO#Q", %{"email" => "<EMAIL>"})
"""
def signup(username, password, options \\ %{}) do
data = Map.merge(%{"username" => username, "password" => password}, options)
post("users", data).body
end
@doc """
Request from a user to login. Username and password are required.
As in the signup function, username and password need to be strings.
"""
def login(username, password) do
params = %{"username" => username, "password" => password} |> URI.encode_query
get("login?#{params}").body
end
@doc """
Request to reset a user's password.
## Example
ParseClient.request_passwd_reset("<EMAIL>")
"""
def request_passwd_reset(email) do
post("requestPasswordReset", %{"email" => email})
end
@doc """
Validates the user. Takes the user's session token as the only argument.
## Example
ParseClient.validate_user("12345678")
"""
def validate_user(token_val) do
Req.request!(:get, "users/me", "", post_headers("X-Parse-Session-Token", token_val)).body
end
@doc """
Deletes the user. Takes the user's objectid and session token as arguments.
## Example
ParseClient.delete_user("g7y9tkhB7O", "12345678")
"""
def delete_user(objectid, token_val) do
Req.request!(:delete, "users/#{objectid}", "", post_headers("X-Parse-Session-Token", token_val))
end
@doc """
Post request to upload a file.
"""
def upload_file(url, contents, content_type) do
Req.request!(:post, url, contents, post_headers("Content-Type", content_type))
end
defp get_headers do
%{"X-Parse-Application-Id" => Auth.config_parse_id,
"X-Parse-REST-API-Key" => Auth.config_parse_key}
end
defp post_headers(key \\ "Content-Type", val \\ "application/json") do
Map.put(get_headers(), key, val)
end
end
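# A short combined example of filters plus options in one query (the class and
# field names are made up; the "-createdAt" descending-order syntax follows the
# Parse REST API):
#
#     ParseClient.query("classes/Animals", %{"age" => %{"$gte" => 2}},
#                       %{"order" => "-createdAt", "limit" => 10})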
# Source: lib/parse_elixir_client.ex
defmodule StreamSplit do
@enforce_keys [:continuation, :stream]
defstruct @enforce_keys
@doc """
This function is a combination of `Enum.take/2` and `Enum.drop/2`, returning
the first `n` elements and the rest of the enum as a stream.
The important difference is that the enumerable is only iterated once, and
only for the required `n` items. The rest of the enumerable may be iterated
lazily later from the returned stream.
## Examples
iex> {head, tail} = take_and_drop(Stream.cycle(1..3), 4)
iex> head
[1, 2, 3, 1]
iex> Enum.take(tail, 7)
[2, 3, 1, 2, 3, 1, 2]
"""
@spec take_and_drop(Enumerable.t(), non_neg_integer) :: {list, Enumerable.t()}
def take_and_drop(enum, n) when n > 0 do
case apply_reduce(enum, n) do
{:done, {_, list}} ->
{:lists.reverse(list), []}
{:suspended, {_, list}, cont} ->
stream_split = %__MODULE__{continuation: cont, stream: continuation_to_stream(cont)}
{:lists.reverse(list), stream_split}
{:halted, {_, list}} ->
{:lists.reverse(list), []}
end
end
def take_and_drop(enum, 0) do
{[], enum}
end
defp apply_reduce(%__MODULE__{continuation: cont}, n) do
cont.({:cont, {n, []}})
end
defp apply_reduce(enum, n) do
Enumerable.reduce(enum, {:cont, {n, []}}, &reducer_helper/2)
end
defp reducer_helper(item, :tail) do
{:suspend, item}
end
defp reducer_helper(item, {c, list}) when c > 1 do
{:cont, {c - 1, [item | list]}}
end
defp reducer_helper(item, {_, list}) do
{:suspend, {0, [item | list]}}
end
defp continuation_to_stream(cont) do
wrapped = fn {_, _, acc_cont} ->
case acc_cont.({:cont, :tail}) do
{:suspended, item, _cont} = acc ->
{[item], acc}
{:halted, acc} ->
{:halt, acc}
{:done, acc} ->
{:halt, acc}
end
end
cleanup = fn
{:suspended, _, acc_cont} ->
acc_cont.({:halt, nil})
_ ->
nil
end
Stream.resource(fn -> {:suspended, nil, cont} end, wrapped, cleanup)
end
@doc """
This function looks at the first `n` items in a stream. The remainder of the
enumerable is returned as a stream that may be lazily enumerated at a later
time.
You may think of this function as popping `n` items of the enumerable, then
pushing them back after making a copy.
Use this function with a stream to peek at items, but not iterate a stream
with side effects more than once.
## Examples
iex> {head, new_enum} = peek(Stream.cycle(1..3), 4)
iex> head
[1, 2, 3, 1]
iex> Enum.take(new_enum, 7)
[1, 2, 3, 1, 2, 3, 1]
"""
@spec peek(Enumerable.t(), non_neg_integer) :: {list, Enumerable.t()}
def peek(enum, n) when n >= 0 do
{h, t} = take_and_drop(enum, n)
{h, Stream.concat(h, t)}
end
@doc """
This function may be seen as splitting head and tail for a `List`, but for
enumerables.
It is implemented on top of `take_and_drop/2`
## Examples
iex> {head, tail} = pop(Stream.cycle(1..3))
iex> head
1
iex> Enum.take(tail, 7)
[2, 3, 1, 2, 3, 1, 2]
"""
@spec pop(Enumerable.t()) :: {any, Enumerable.t()}
def pop(enum) do
case take_and_drop(enum, 1) do
{[], []} -> {[], []}
{[h], rest} -> {h, rest}
end
end
end
defimpl Enumerable, for: StreamSplit do
def count(_stream_split), do: {:error, __MODULE__}
def member?(_stream_split, _value), do: {:error, __MODULE__}
def slice(_stream_split), do: {:error, __MODULE__}
def reduce(%StreamSplit{stream: stream}, acc, fun) do
Enumerable.reduce(stream, acc, fun)
end
end
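# The one-pass property in practice: split off a header line without consuming
# the rest of the stream (the file path is a placeholder):
#
#     {[header], rows} = StreamSplit.take_and_drop(File.stream!("data.csv"), 1)
#     rows |> Stream.map(&String.trim/1) |> Enum.take(5)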
# Source: lib/stream_split.ex
defmodule Nx.Defn do
@moduledoc ~S"""
Numerical functions.
A numerical function is a subset of Elixir tailored for
numerical computations. For example, the following function:
defn add_and_mult(a, b, c) do
a * b + c
end
will work with scalars, vector, matrices, and n-dimensional
tensors. Depending on your compiler of choice, the code can even
be JIT-compiled or AOT-compiled and run either on the CPU or GPU.
To support these features, `defn` is a subset of Elixir. It
replaces Elixir's `Kernel` by `Nx.Defn.Kernel`. `Nx.Defn.Kernel`
provides tensor-aware operators, such as `+`, `-`, etc, while
also preserving many high-level constructs known to Elixir
developers, such as pipe operator, aliases, conditionals,
pattern-matching, the access syntax, and more:
For example, the code above can also be written as:
defn add_and_mult(a, b, c) do
a
|> Nx.multiply(b)
|> Nx.add(c)
end
Please consult `Nx.Defn.Kernel` for a complete reference.
## Operators
`defn` attempts to keep as close to the Elixir semantics as
possible but that's not achievable. For example, mathematical
and bitwise operators (`+`, `-`, `&&&`, `<<<`, etc.) in Elixir
work on numbers, which means mapping them to tensors is
straight-forward and they largely preserve the same semantics,
except they are now multi-dimensional.
On the other hand, the logical operators `and`, `or`, and `not`
work with booleans in Elixir (`true` and `false`), which map
to `0` and `1` in `defn`.
Therefore, when working with logical operators inside `defn`,
`0` is considered `false` and all other numbers are considered
`true`, which is represented as the number `1`. For example, in
`defn`, `0 and 1` as well as `0 and 2` return `0`, while
`1 and 1` or `1 and -1` will return `1`.
The same semantics apply to conditional expressions inside `defn`.
## JIT compilers
The power of `Nx.Defn` is given by its compilers. The default
compiler is the `Nx.Defn` module itself, which executes the code
in pure Elixir. However, you can use module attributes to specify
how a `defn` function will behave. For example, assuming you
are using the `EXLA` compiler:
@defn_compiler {EXLA, client: :host}
defn add_and_mult(a, b, c) do
a * b + c
end
To set the compiler for the all definitions, you can set the
`@default_defn_compiler` attribute:
@default_defn_compiler {EXLA, client: :cuda}
`defn` functions are compiled when they are invoked, based on
the type and shapes of the tensors given as arguments. Once
invoked for the first time, the compilation is cached based
on the tensors shapes and types. Calling the same function with
a tensor of different values but same shape and type means no
further compilation is performed.
Also note that the defn compiler only applies to the first
call to `defn`. All other calls that happen within that `defn`
will use the same compiler. For example, imagine this code:
@defn_compiler Nx.Defn.Evaluator # the default
defn add(a, b), do: do_add(a, b)
@defn_compiler EXLA
defnp do_add(a, b), do: a + b
When calling `add/2` directly, even though it calls `do_add/2`
which uses EXLA, the call to `add/2` will be compiled with
`Nx.Defn.Evaluator` exclusively. In other words, only the
entry-point compiler matters.
For those interested in writing custom compilers, see `Nx.Defn.Compiler`.
## Inputs and outputs types
The inputs to `defn` functions must be either tuples, numbers,
or tensors. To pass non-numerical values to numerical definitions,
they must be declared as default arguments (see next subsection).
`defn` functions can only return tensors or tuples of tensors.
### Default arguments
`defn` functions also support default arguments. They are typically
used as options. For example, imagine you want to create a function
named zeros, which returns a tensor of zeroes with a given type and
shape. It could be implemented like this:
defn zeros(opts \\ []) do
opts = keyword!(opts, type: {:f, 32}, shape: {})
Nx.broadcast(Nx.tensor(0, type: opts[:type]), opts[:shape])
end
The function above accepts `opts` which are then validated and given
default values via the `keyword!/2` function. Note that while it is
possible to access options via the `Access` syntax, such as `opts[:shape]`,
it is not possible to directly call functions in the `Keyword` module
inside `defn`. To freely manipulate any Elixir value inside `defn`,
you have to use transforms, as described in the "Invoking custom Elixir
code" section.
When it comes to JIT compilation, it is important to notice that each
different set of options will lead to a different compilation of the
numerical function. Also note that, if tensors are given as default
arguments, the whole tensor will be used as the compilation key. So
even if you pass different tensors with the same type and shape, it
will lead to different compilation artifacts. For this reason, it
is **extremely discouraged to pass tensors through default arguments**.
### Tuples and pattern matching
When passing tuples as inputs to `defn` functions, the tuples
must be matched on the function head. For example, this is valid:
defn my_example({a, b}, c), do: a * b + c
This is not:
defn my_example(ab, c) do
{a, b} = ab
a * b + c
end
This, however, works:
defn my_example({_, _} = ab, c) do
{a, b} = ab
a * b + c
end
In other words, it is important for `defn` to see the shapes
of the input. If you write the latter format and call `defn` from
Elixir, `defn` will raise.
## Invoking custom Elixir code
Inside `defn` you can only call other `defn` functions and
the functions in the `Nx` module. However, it is possible
to use transforms to invoke any Elixir code:
defn add_and_mult(a, b, c) do
res = a * b + c
transform(res, &IO.inspect/1)
end
For example, the code above invokes `&IO.inspect/1`, which is
not a `defn` function, with the value of `res`. This is useful
as it allows developers to transform `defn` code at runtime,
in order to optimize, add new properties, and so on.
Transforms can also be used to manipulate Elixir data structures,
such as options. For example, imagine you want to support options
where the :axis key is required. While you can't invoke `Keyword`
directly, you can do it via a transform:
defn sum_axis(t, opts \\ []) do
opts = keyword!(opts, [:axis])
axis = transform(opts, &Keyword.fetch!(opts, :axis))
Nx.sum(t, axes: [axis])
end
"""
@doc """
Invokes the anonymous function asynchronously with
just-in-time compilation.
The anonymous function will be invoked with tensor expressions
which are JIT compiled and then invoked. The anonymous function
will also run outside of the current Elixir process. You can
retrieve the result by calling `Nx.Async.await!/1`. Take the
following definition:
defn softmax(t), do: Nx.exp(t) / Nx.sum(Nx.exp(t))
We can invoke it asynchronously as follows:
some_struct = Nx.Defn.async(&Mod.softmax/1, [my_tensor], EXLA)
Nx.Async.await!(some_struct)
**Note:** similar to `jit/4`, `async/4` will ignore the `@defn_compiler`
on the executed function. Be sure to pass the `compiler` and its `opts`
as arguments instead.
"""
def async(fun, args, compiler \\ Nx.Defn.Evaluator, opts \\ [])
when is_function(fun) and is_list(args) and is_atom(compiler) and is_list(opts) do
Nx.Defn.Compiler.__async__(fun, args, compiler, opts)
end
@doc """
Invokes the anonymous function with just-in-time compilation.
The anonymous function will be invoked with tensor expressions
which are JIT compiled and then invoked. For example, take the
following definition:
defn softmax(t), do: Nx.exp(t) / Nx.sum(Nx.exp(t))
**Note:** `jit/4` will ignore the `@defn_compiler` on the executed
function. Be sure to pass the `compiler` and its `opts` as arguments
instead:
Nx.Defn.jit(&Mod.softmax/1, [my_tensor], EXLA)
Nx.Defn.jit(&Mod.softmax/1, [my_tensor], EXLA, run_options: [keep_on_device: true])
"""
def jit(fun, args, compiler \\ Nx.Defn.Evaluator, opts \\ [])
when is_function(fun) and is_list(args) and is_atom(compiler) and is_list(opts) do
Nx.Defn.Compiler.__jit__(fun, args, compiler, opts)
end
@doc """
Exports the ahead-of-time (AOT) definition of a module with the
given `functions` using the given `compiler`. For example:
functions = [
{:softmax, &Nx.exp(&1)/Nx.sum(Nx.exp(&1)), [Nx.template({100, 100}, {:f, 32})]},
{:normalize, &Nx.divide(&1, 255), [Nx.template({100, 100}, {:f, 32})]}
]
:ok = Nx.Defn.export_aot("priv", MyModule, functions, EXLA)
The above will export a module definition called `MyModule`
to the given directory with `softmax/1` and `normalize/1` as
functions that expect 100x100 f32 tensors.
This definition can then be imported at will.
`functions` is a list of 3- or 4-element tuples. The first element
is the function name, the second is an anonymous function that returns
the tensor expression for a given function, the third is a list of
the arguments as templates, and the fourth is an option list of
options for the given tensor expression (often similar to the options
you would pass on the call to `jit/2`).
The options to each function, as well as the `aot_options`, are specific
to the given compiler.
## AOT export with Mix
Ahead-of-time exports with Mix are useful because you only need
the compilation environment, such as EXLA, when exporting.
In practice, you can do this:
1. Add `{:exla, ..., only: :export_aot}` as a dependency
2. Define an exporting script at `script/export_my_module.exs`
3. Run the script to export the AOT `mix run script/export_my_module.exs`
4. Now inside `lib/my_module.ex` you can import it:
if File.exists?("priv/MyModule.nx.aot") do
defmodule MyModule do
Nx.Defn.import_aot("priv", __MODULE__)
end
else
IO.warn "Skipping MyModule because aot definition was not found"
end
"""
def export_aot(dir, module, functions, compiler, aot_opts \\ [])
when is_binary(dir) and is_atom(module) and is_list(functions) and is_atom(compiler) and
is_list(aot_opts) do
functions =
for tuple <- functions do
case tuple do
{name, fun, args} ->
{name, fun, args, []}
{name, fun, args, opts} ->
{name, fun, args, opts}
_ ->
raise ArgumentError,
"expected 3- or 4-element tuples as functions, got: #{inspect(tuple)}"
end
end
Nx.Defn.Compiler.__export_aot__(dir, module, functions, compiler, aot_opts)
end
@doc """
Imports a previously exported AOT definition for `module` at `dir`.
See `export_aot/4` for more information.
"""
def import_aot(dir, module) when is_binary(dir) and is_atom(module) do
unless Module.open?(module) do
raise ArgumentError,
"""
cannot import_aot/2 for #{inspect(module)} because module was already defined.
You should call import_aot/2 while the module is being defined:
defmodule MyModule do
Nx.Defn.import_aot("priv", MyModule)
end
"""
end
Nx.Defn.Compiler.__import_aot__(dir, module, true)
end
@doc """
Defines a `module` by compiling an ahead-of-time (AOT) definition
with the given `functions` using the given `compiler`.
For example:
functions = [
{:softmax, &Nx.exp(&1)/Nx.sum(Nx.exp(&1)), [Nx.template({100, 100}, {:f, 32})]},
{:normalize, &Nx.divide(&1, 255), [Nx.template({100, 100}, {:f, 32})]}
]
Nx.Defn.aot(MyModule, functions, EXLA)
The above will define a module called `MyModule` with
`softmax/1` and `normalize/1` as functions that expect
100x100 f32 tensors.
While this function defines the module immediately, in
practice most developers will use `export_aot` to export
the AOT definition and then use `import_aot` to import it.
This is useful because you need the compilation
environment, such as EXLA, only when exporting. In practice,
this function is equivalent to the following:
:ok = Nx.Defn.export_aot("priv/my_app/aot", MyModule, functions, EXLA)
defmodule MyModule do
Nx.Defn.import_aot("priv/my_app/aot", __MODULE__)
end
See `export_aot/5` for more information.
"""
def aot(module, functions, compiler, aot_opts \\ [])
when is_atom(module) and is_list(functions) and is_atom(compiler) and is_list(aot_opts) do
output_dir = Path.join(System.tmp_dir(), "elixir-nx/aot#{System.unique_integer()}")
try do
case export_aot(output_dir, module, functions, compiler, aot_opts) do
:ok ->
defmodule module do
@moduledoc false
Nx.Defn.Compiler.__import_aot__(output_dir, module, false)
:ok
end
{:error, exception} ->
raise exception
end
after
File.rm_rf!(output_dir)
end
end
@doc """
Defines a public numerical function.
"""
defmacro defn(call, do: block) do
define(:def, call, block, __CALLER__)
end
@doc """
Defines a private numerical function.
Private numerical functions are always inlined by
their callers at compilation time. This happens to
all local function calls within `defn`.
"""
defmacro defnp(call, do: block) do
define(:defp, call, block, __CALLER__)
end
## Callbacks
defp define(kind, call, block, env) do
assert_no_guards!(kind, call, env)
# Note name here is not necessarily an atom due to unquote(name) support
{name, args} = decompose_call!(kind, call, env)
defaults = for {{:\\, _, [_, _]}, i} <- Enum.with_index(args), do: i
arity = length(args)
quote do
unquote(__MODULE__).__define__(
__MODULE__,
unquote(kind),
unquote(name),
unquote(arity),
unquote(defaults)
)
unquote(kind)(unquote(call)) do
use Nx.Defn.Kernel
unquote(block)
end
end
end
defp decompose_call!(_kind, {{:unquote, _, [name]}, _, args}, _env) do
{name, args}
end
defp decompose_call!(kind, call, env) do
case Macro.decompose_call(call) do
{name, args} ->
{name, args}
:error ->
compile_error!(
env,
"first argument of #{kind}n must be a call, got: #{Macro.to_string(call)}"
)
end
end
defp assert_no_guards!(kind, {:when, _, _}, env) do
compile_error!(env, "guards are not supported by #{kind}n")
end
defp assert_no_guards!(_kind, _call, _env), do: :ok
# Internal attributes
@exports_key :__defn_exports__
# Per-defn attributes
@defn_compiler :defn_compiler
# Module attributes
@default_defn_compiler :default_defn_compiler
@doc false
def __define__(module, kind, name, arity, defaults) do
exports =
if exports = Module.get_attribute(module, @exports_key) do
exports
else
Module.put_attribute(module, :before_compile, __MODULE__)
%{}
end
compiler =
Module.delete_attribute(module, @defn_compiler) ||
Module.get_attribute(module, @default_defn_compiler) ||
Nx.Defn.Evaluator
exports =
Map.put(exports, {name, arity}, %{
kind: kind,
compiler: normalize_compiler!(compiler),
defaults: defaults
})
Module.put_attribute(module, @exports_key, exports)
:ok
end
defp normalize_compiler!(atom) when is_atom(atom), do: {atom, []}
defp normalize_compiler!({atom, term}) when is_atom(atom), do: {atom, term}
defp normalize_compiler!(other) do
raise ArgumentError,
"expected @defn_compiler/@default_defn_compiler to be an atom or " <>
"a tuple with an atom as first element, got: #{inspect(other)}"
end
defp compile_error!(env, description) do
raise CompileError, line: env.line, file: env.file, description: description
end
@doc false
defmacro __before_compile__(env) do
exports = Module.get_attribute(env.module, @exports_key)
Nx.Defn.Compiler.__compile__(env, exports)
end
end
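# A minimal sketch of defining and calling a numerical function with the
# default evaluator (the module name is illustrative; the softmax definition
# is the one used in the docs above):
#
#     defmodule MyMath do
#       import Nx.Defn
#       defn softmax(t), do: Nx.exp(t) / Nx.sum(Nx.exp(t))
#     end
#
#     MyMath.softmax(Nx.tensor([1.0, 2.0, 3.0]))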
# Source: lib/nx/defn.ex
defmodule Membrane.FLV.Muxer do
@moduledoc """
Element for muxing AAC and H264 streams into FLV format.
Input pads are dynamic, but you need to connect them before transitioning to state `playing`.
Due to limitations of the FLV format, only one audio and one video stream can be muxed and they both need to have a stream_id of 0.
Therefore, please make sure you only use the following pads:
- `Pad.ref(:audio, 0)`
- `Pad.ref(:video, 0)`
"""
use Membrane.Filter
alias Membrane.{AAC, Buffer, FLV}
alias Membrane.FLV.{Header, Packet, Serializer}
def_input_pad :audio,
availability: :on_request,
caps: {AAC, encapsulation: :none},
mode: :pull,
demand_unit: :buffers
def_input_pad :video,
availability: :on_request,
caps: Membrane.MP4.Payload,
mode: :pull,
demand_unit: :buffers
def_output_pad :output,
availability: :always,
caps: {Membrane.RemoteStream, content_format: FLV},
mode: :pull
@impl true
def handle_init(_opts) do
{:ok,
%{
previous_tag_size: 0,
last_dts: %{},
header_sent: false
}}
end
@impl true
def handle_pad_added(Pad.ref(_type, stream_id), _ctx, _state) when stream_id != 0,
do: raise(ArgumentError, message: "Stream id must always be 0")
@impl true
def handle_pad_added(_pad, ctx, _state) when ctx.playback_state == :playing,
do: raise("Adding pads after transition to state :playing is not allowed")
@impl true
def handle_pad_added(Pad.ref(_type, 0) = pad, _ctx, state) do
state = put_in(state, [:last_dts, pad], 0)
{:ok, state}
end
@impl true
def handle_prepared_to_playing(ctx, state) do
{actions, state} =
%Header{
audio_present?: has_stream?(:audio, ctx),
video_present?: has_stream?(:video, ctx)
}
|> prepare_to_send(state)
{{:ok, actions}, state}
end
@impl true
def handle_demand(:output, _size, :buffers, _ctx, state) do
# We will request one buffer from the stream that has the lowest timestamp
# This will ensure that the output stream has reasonable audio / video balance
{pad, _dts} = Enum.min_by(state.last_dts, &Bunch.value/1)
{{:ok, [demand: {pad, 1}]}, state}
end
@impl true
def handle_process(Pad.ref(type, stream_id) = pad, buffer, ctx, state) do
if ctx.pads[pad].caps == nil,
do: raise("Caps must be sent before sending a packet")
dts = get_timestamp(buffer.dts || buffer.pts)
pts = get_timestamp(buffer.pts || dts)
state = put_in(state, [:last_dts, pad], dts)
{actions, state} =
%Packet{
type: type,
stream_id: stream_id,
payload: buffer.payload,
codec: codec(type),
pts: pts,
dts: dts,
frame_type:
if(type == :audio or buffer.metadata.h264.key_frame?, do: :keyframe, else: :interframe)
}
|> prepare_to_send(state)
{{:ok, actions ++ [redemand: :output]}, state}
end
@impl true
def handle_caps(Pad.ref(:audio, stream_id) = pad, %AAC{} = caps, _ctx, state) do
timestamp = Map.get(state.last_dts, pad, 0) |> get_timestamp()
%Packet{
type: :audio_config,
stream_id: stream_id,
payload: Serializer.aac_to_audio_specific_config(caps),
codec: codec(:audio),
pts: timestamp,
dts: timestamp
}
|> prepare_to_send(state)
|> then(fn {actions, state} -> {{:ok, actions}, state} end)
end
@impl true
def handle_caps(
Pad.ref(:video, stream_id) = pad,
%Membrane.MP4.Payload{content: %Membrane.MP4.Payload.AVC1{avcc: config}} = _caps,
_ctx,
state
) do
timestamp = Map.get(state.last_dts, pad, 0) |> get_timestamp()
%Packet{
type: :video_config,
stream_id: stream_id,
payload: config,
codec: codec(:video),
pts: timestamp,
dts: timestamp
}
|> prepare_to_send(state)
|> then(fn {actions, state} -> {{:ok, actions}, state} end)
end
@impl true
def handle_caps(Pad.ref(type, _id) = _pad, caps, _ctx, _state),
do: raise("Caps `#{inspect(caps)}` are not supported for stream type #{inspect(type)}")
@impl true
def handle_end_of_stream(pad, ctx, state) do
# Check if there are any input pads that didn't eos. If not, send end of stream on output
state = Map.update!(state, :last_dts, &Map.delete(&1, pad))
if Enum.any?(ctx.pads, &match?({_, %{direction: :input, end_of_stream?: false}}, &1)) do
{:ok, state}
else
last = <<state.previous_tag_size::32>>
{{:ok, buffer: {:output, %Buffer{payload: last}}, end_of_stream: :output}, state}
end
end
defp codec(:audio), do: :AAC
defp codec(:video), do: :H264
defp get_timestamp(timestamp) when is_nil(timestamp), do: nil
defp get_timestamp(timestamp),
do:
Ratio.floor(timestamp)
|> Membrane.Time.as_milliseconds()
|> Ratio.floor()
defp prepare_to_send(segment, state) do
{tag, previous_tag_size} = Serializer.serialize(segment, state.previous_tag_size)
actions = [
caps: {:output, %Membrane.RemoteStream{content_format: FLV}},
buffer: {:output, %Buffer{payload: tag}}
]
state = Map.put(state, :previous_tag_size, previous_tag_size)
{actions, state}
end
defp has_stream?(type, ctx), do: ctx.pads |> Enum.any?(&match?({Pad.ref(^type, _), _value}, &1))
end
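# A hedged pipeline sketch showing the required pad references (the child names
# and surrounding elements are hypothetical; the link syntax assumes the
# Membrane.ParentSpec API of the same era as this module):
#
#     links = [
#       link(:aac_parser) |> via_in(Pad.ref(:audio, 0)) |> to(:muxer),
#       link(:h264_payloader) |> via_in(Pad.ref(:video, 0)) |> to(:muxer),
#       link(:muxer) |> to(:file_sink)
#     ]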
# Source: lib/membrane_flv_plugin/muxer.ex
defmodule Phoenix.PubSub do
@moduledoc """
Front-end to Phoenix pubsub layer.
Used internally by Channels for pubsub broadcast but
also provides an API for direct usage.
## Adapters
Phoenix pubsub was designed to be flexible and support
multiple backends. We currently ship with two backends:
* `Phoenix.PubSub.PG2` - uses Distributed Elixir,
directly exchanging notifications between servers
* `Phoenix.PubSub.Redis` - uses Redis to exchange
data between servers
Pubsub adapters are often configured in your endpoint:
config :my_app, MyApp.Endpoint,
pubsub: [adapter: Phoenix.PubSub.PG2,
pool_size: 1,
name: MyApp.PubSub]
The configuration above takes care of starting the
pubsub backend and exposing its functions via the
endpoint module. If no adapter but a name is given,
nothing will be started, but the pubsub system will
work by sending events and subscribing to the given
name.
## Direct usage
It is also possible to use `Phoenix.PubSub` directly
or even run your own pubsub backends outside of an
Endpoint.
The first step is to start the adapter of choice in your
supervision tree:
supervisor(Phoenix.PubSub.Redis, [:my_pubsub, host: "192.168.100.1"])
The configuration above will start a Redis pubsub and
register it with name `:my_pubsub`.
You can now use the functions in this module to subscribe
and broadcast messages:
iex> PubSub.subscribe :my_pubsub, "user:123"
:ok
iex> Process.info(self())[:messages]
[]
iex> PubSub.broadcast :my_pubsub, "user:123", {:user_update, %{id: 123, name: "Shane"}}
:ok
iex> Process.info(self())[:messages]
{:user_update, %{id: 123, name: "Shane"}}
## Implementing your own adapter
PubSub adapters run inside their own supervision tree.
If you are interested in providing your own adapter, let's
call it `Phoenix.PubSub.MyQueue`, the first step is to provide
a supervisor module that receives the server name and a bunch
of options on `start_link/2`:
defmodule Phoenix.PubSub.MyQueue do
def start_link(name, options) do
Supervisor.start_link(__MODULE__, {name, options},
name: Module.concat(name, Supervisor))
end
def init({name, options}) do
...
end
end
On `init/1`, you will define the supervision tree and use the given
`name` to register the main pubsub process locally. This process must
be able to handle the following GenServer calls:
* `subscribe` - subscribes the given pid to the given topic
sends: `{:subscribe, pid, topic, opts}`
respond with: `:ok | {:error, reason} | {:perform, {m, f, a}}`
* `unsubscribe` - unsubscribes the given pid from the given topic
sends: `{:unsubscribe, pid, topic}`
respond with: `:ok | {:error, reason} | {:perform, {m, f, a}}`
* `broadcast` - broadcasts a message on the given topic
sends: `{:broadcast, :none | pid, topic, message}`
respond with: `:ok | {:error, reason} | {:perform, {m, f, a}}`
### Offloading work to clients via MFA response
The `Phoenix.PubSub` API allows any of its functions to handle a
response from the adapter matching `{:perform, {m, f, a}}`. The PubSub
client will recursively invoke all MFA responses until a result is
returned. This is useful for offloading work to clients without blocking
your PubSub adapter. See `Phoenix.PubSub.PG2` implementation for examples.
"""
@type node_name :: atom | binary
defmodule BroadcastError do
defexception [:message]
def exception(msg) do
%BroadcastError{message: "broadcast failed with #{inspect msg}"}
end
end
@doc """
Subscribes the caller to the PubSub adapter's topic.
* `server` - The Pid registered name of the server
* `topic` - The topic to subscribe to, for example: `"users:123"`
* `opts` - The optional list of options. See below.
## Duplicate Subscriptions
Callers should only subscribe to a given topic a single time.
Duplicate subscriptions for a Pid/topic pair are allowed and
will cause duplicate events to be sent; however, when using
`Phoenix.PubSub.unsubscribe/3`, all duplicate subscriptions
will be dropped.
## Options
* `:link` - links the subscriber to the pubsub adapter
* `:fastlane` - Provides a fastlane path for the broadcasts for
`%Phoenix.Socket.Broadcast{}` events. The fastlane process is
notified of a cached message instead of the normal subscriber.
Fastlane handlers must implement `fastlane/1` callbacks which accepts
a `Phoenix.Socket.Broadcast` struct and returns a fastlaned format
for the handler. For example:
PubSub.subscribe(MyApp.PubSub, "topic1",
fastlane: {fast_pid, Phoenix.Transports.WebSocketSerializer, ["event1"]})
"""
@spec subscribe(atom, pid, binary) :: :ok | {:error, term}
def subscribe(server, pid, topic)
when is_atom(server) and is_pid(pid) and is_binary(topic) do
subscribe(server, pid, topic, [])
end
@spec subscribe(atom, binary, Keyword.t) :: :ok | {:error, term}
def subscribe(server, topic, opts)
when is_atom(server) and is_binary(topic) and is_list(opts) do
call(server, :subscribe, [self(), topic, opts])
end
@spec subscribe(atom, binary) :: :ok | {:error, term}
def subscribe(server, topic) when is_atom(server) and is_binary(topic) do
subscribe(server, topic, [])
end
@spec subscribe(atom, pid, binary, Keyword.t) :: :ok | {:error, term}
def subscribe(server, pid, topic, opts) do
IO.write :stderr, "[warning] Passing a Pid to Phoenix.PubSub.subscribe is deprecated. " <>
"Only the calling process may subscribe to topics"
call(server, :subscribe, [pid, topic, opts])
end
@doc """
Unsubscribes the caller from the PubSub adapter's topic.
"""
@spec unsubscribe(atom, pid, binary) :: :ok | {:error, term}
def unsubscribe(server, pid, topic) when is_atom(server) do
IO.write :stderr, "[warning] Passing a Pid to Phoenix.PubSub.unsubscribe is deprecated. " <>
"Only the calling process may unsubscribe from topics"
call(server, :unsubscribe, [pid, topic])
end
@spec unsubscribe(atom, binary) :: :ok | {:error, term}
def unsubscribe(server, topic) when is_atom(server) do
call(server, :unsubscribe, [self(), topic])
end
@doc """
Broadcasts message on given topic.
* `server` - The Pid or registered server name and optional node to
scope the broadcast, for example: `MyApp.PubSub`, `{MyApp.PubSub, :a@node}`
* `topic` - The topic to broadcast to, ie: `"users:123"`
* `message` - The payload of the broadcast
"""
@spec broadcast(atom, binary, term) :: :ok | {:error, term}
def broadcast(server, topic, message) when is_atom(server) or is_tuple(server),
do: call(server, :broadcast, [:none, topic, message])
@doc """
Broadcasts message on given topic, to a single node.
* `node` - The name of the node to broadcast the message on
* `server` - The Pid or registered server name and optional node to
scope the broadcast, for example: `MyApp.PubSub`, `{MyApp.PubSub, :a@node}`
* `topic` - The topic to broadcast to, ie: `"users:123"`
* `message` - The payload of the broadcast
"""
@spec direct_broadcast(node_name, atom, binary, term) :: :ok | {:error, term}
def direct_broadcast(node_name, server, topic, message) when is_atom(server),
do: call(server, :direct_broadcast, [node_name, :none, topic, message])
@doc """
Broadcasts message on given topic.
Raises `Phoenix.PubSub.BroadcastError` if broadcast fails.
See `Phoenix.PubSub.broadcast/3` for usage details.
"""
@spec broadcast!(atom, binary, term) :: :ok | no_return
def broadcast!(server, topic, message) do
case broadcast(server, topic, message) do
:ok -> :ok
{:error, reason} -> raise BroadcastError, message: reason
end
end
@doc """
Broadcasts message on given topic, to a single node.
Raises `Phoenix.PubSub.BroadcastError` if broadcast fails.
See `Phoenix.PubSub.broadcast/3` for usage details.
"""
@spec direct_broadcast!(node_name, atom, binary, term) :: :ok | no_return
def direct_broadcast!(node_name, server, topic, message) do
case direct_broadcast(node_name, server, topic, message) do
:ok -> :ok
{:error, reason} -> raise BroadcastError, message: reason
end
end
@doc """
Broadcasts message to all but `from_pid` on given topic.
See `Phoenix.PubSub.broadcast/3` for usage details.
"""
@spec broadcast_from(atom, pid, binary, term) :: :ok | {:error, term}
def broadcast_from(server, from_pid, topic, message) when is_atom(server) and is_pid(from_pid),
do: call(server, :broadcast, [from_pid, topic, message])
@doc """
Broadcasts message to all but `from_pid` on given topic, to a single node.
See `Phoenix.PubSub.broadcast/3` for usage details.
"""
@spec direct_broadcast_from(node_name, atom, pid, binary, term) :: :ok | {:error, term}
def direct_broadcast_from(node_name, server, from_pid, topic, message)
when is_atom(server) and is_pid(from_pid),
do: call(server, :direct_broadcast, [node_name, from_pid, topic, message])
@doc """
Broadcasts message to all but `from_pid` on given topic.
Raises `Phoenix.PubSub.BroadcastError` if broadcast fails.
See `Phoenix.PubSub.broadcast/3` for usage details.
"""
@spec broadcast_from!(atom | {atom, atom}, pid, binary, term) :: :ok | no_return
def broadcast_from!(server, from_pid, topic, message) when is_atom(server) and is_pid(from_pid) do
case broadcast_from(server, from_pid, topic, message) do
:ok -> :ok
{:error, reason} -> raise BroadcastError, message: reason
end
end
@doc """
Broadcasts message to all but `from_pid` on given topic, to a single node.
Raises `Phoenix.PubSub.BroadcastError` if broadcast fails.
See `Phoenix.PubSub.broadcast/3` for usage details.
"""
@spec direct_broadcast_from!(node_name, atom, pid, binary, term) :: :ok | no_return
def direct_broadcast_from!(node_name, server, from_pid, topic, message)
when is_atom(server) and is_pid(from_pid) do
case direct_broadcast_from(node_name, server, from_pid, topic, message) do
:ok -> :ok
{:error, reason} -> raise BroadcastError, message: reason
end
end
@doc """
Returns the node name of the PubSub server.
"""
@spec node_name(atom) :: atom | binary
def node_name(server) do
call(server, :node_name, [])
end
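# Adapter dispatch: each PubSub server owns an ETS table (named after the
# server) that maps an operation name to the adapter module and its leading
# arguments; the call-site args are appended before applying.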
defp call(server, kind, args) do
[{^kind, module, head}] = :ets.lookup(server, kind)
apply(module, kind, head ++ args)
end
end
|
deps/phoenix_pubsub/lib/phoenix/pubsub.ex
| 0.908634
| 0.430207
|
pubsub.ex
|
starcoder
|
defmodule Absinthe.Resolution.Helpers do
@moduledoc """
Handy functions for returning async or batched resolution functions
Using `Absinthe.Schema.Notation` or (by extension) `Absinthe.Schema` will
automatically import the `batch` and `async` helpers. Dataloader helpers
require an explicit `import Absinthe.Resolution.Helpers` invocation, since
dataloader is an optional dependency.
"""
alias Absinthe.Middleware
@doc """
Execute resolution field asynchronously.
This is a helper function for using the `Absinthe.Middleware.Async`.
Forbidden in mutation fields. (TODO: actually enforce this)
## Options
- `:timeout` default: `30_000`. The maximum timeout to wait for running
the task.
## Example
Using the `Absinthe.Resolution.Helpers.async/1` helper function:
```elixir
field :time_consuming, :thing do
resolve fn _, _, _ ->
async(fn ->
{:ok, long_time_consuming_function()}
end)
end
end
```
"""
@spec async((() -> term)) :: {:middleware, Middleware.Async, term}
@spec async((() -> term), opts :: [{:timeout, pos_integer}]) ::
{:middleware, Middleware.Async, term}
def async(fun, opts \\ []) do
{:middleware, Middleware.Async, {fun, opts}}
end
@doc """
Batch the resolution of several functions together.
Helper function for creating `Absinthe.Middleware.Batch`
## Options
- `:timeout` default: `5_000`. The maximum timeout to wait for running
a batch.
## Example
Raw usage:
```elixir
object :post do
field :name, :string
field :author, :user do
resolve fn post, _, _ ->
batch({__MODULE__, :users_by_id}, post.author_id, fn batch_results ->
{:ok, Map.get(batch_results, post.author_id)}
end)
end
end
end
def users_by_id(_, user_ids) do
users = Repo.all from u in User, where: u.id in ^user_ids
Map.new(users, fn user -> {user.id, user} end)
end
```
"""
@spec batch(Middleware.Batch.batch_fun(), term, Middleware.Batch.post_batch_fun()) ::
{:middleware, Middleware.Batch, term}
@spec batch(
Middleware.Batch.batch_fun(),
term,
Middleware.Batch.post_batch_fun(),
opts :: [{:timeout, pos_integer}]
) :: {:middleware, Middleware.Batch, term}
def batch(batch_fun, batch_data, post_batch_fun, opts \\ []) do
batch_config = {batch_fun, batch_data, post_batch_fun, opts}
{:middleware, Middleware.Batch, batch_config}
end
if Code.ensure_loaded?(Dataloader) do
@doc """
Dataloader helper function
This function is not imported by default. To make it available in your module do
```
import Absinthe.Resolution.Helpers
```
This function helps you use data loader in a direct way within your schema.
While normally the `dataloader/1,2,3` helpers are enough, `on_load/2` is useful
when you want to load multiple things in a single resolver, or when you need
fine grained control over the dataloader cache.
## Examples
```elixir
field :reports, list_of(:report) do
resolve fn shipment, _, %{context: %{loader: loader}} ->
loader
|> Dataloader.load(SourceName, :automatic_reports, shipment)
|> Dataloader.load(SourceName, :manual_reports, shipment)
|> on_load(fn loader ->
reports =
loader
|> Dataloader.get(SourceName, :automatic_reports, shipment)
|> Enum.concat(Dataloader.get(loader, SourceName, :manual_reports, shipment))
|> Enum.sort_by(&reported_at/1)
{:ok, reports}
end)
end
end
```
"""
def on_load(loader, fun) do
{:middleware, Absinthe.Middleware.Dataloader, {loader, fun}}
end
@type dataloader_tuple :: {:middleware, Absinthe.Middleware.Dataloader, term}
@type dataloader_key_fun ::
(Absinthe.Resolution.source(),
Absinthe.Resolution.arguments(),
Absinthe.Resolution.t() ->
{any, map})
@type dataloader_opt ::
{:args, map}
| {:use_parent, true | false}
| {:callback, (map(), map(), map() -> any())}
@doc """
Resolve a field with a dataloader source.
This function is not imported by default. To make it available in your module do
```
import Absinthe.Resolution.Helpers
```
Same as `dataloader/3`, but it infers the resource name from the field name.
## Examples
```
field :author, :user, resolve: dataloader(Blog)
```
This is identical to doing the following.
```
field :author, :user, resolve: dataloader(Blog, :author, [])
```
"""
@spec dataloader(Dataloader.source_name()) :: dataloader_key_fun()
def dataloader(source) do
dataloader(source, [])
end
@doc """
Resolve a field with a dataloader source.
This function is not imported by default. To make it available in your module do
```
import Absinthe.Resolution.Helpers
```
Same as `dataloader/3`, but it infers the resource name from the field name. For `opts` see
`dataloader/3` on what options can be passed in.
## Examples
```
object :user do
field :posts, list_of(:post),
resolve: dataloader(Blog, args: %{deleted: false})
field :organization, :organization do
resolve dataloader(Accounts, use_parent: false)
end
field(:account_active, non_null(:boolean), resolve: dataloader(
Accounts, callback: fn account, _parent, _args ->
{:ok, account.active}
end
)
)
end
```
"""
@spec dataloader(Dataloader.source_name(), [dataloader_opt]) :: dataloader_key_fun()
def dataloader(source, opts) when is_list(opts) do
fn parent, args, %{context: %{loader: loader}} = res ->
resource = res.definition.schema_node.identifier
do_dataloader(loader, source, resource, args, parent, opts)
end
end
@doc """
Resolve a field with Dataloader
This function is not imported by default. To make it available in your module do
```
import Absinthe.Resolution.Helpers
```
While `on_load/2` makes it easy to use dataloader directly within a resolver
function, you often do not need that level of direct control.
The `dataloader/3` function exists to provide a simple API for using dataloader.
It takes the name of a data source, the name of the resource you want to load,
and then a variety of options.
## Basic Usage
```
object :user do
field :posts, list_of(:post),
resolve: dataloader(Blog, :posts, args: %{deleted: false})
field :organization, :organization do
resolve dataloader(Accounts, :organization, use_parent: false)
end
field(:account_active, non_null(:boolean), resolve: dataloader(
Accounts, :account, callback: fn account, _parent, _args ->
{:ok, account.active}
end
)
)
end
```
## Key Functions
Instead of passing in a literal like `:posts` or `:organization` as the resource,
it is also possible to pass in a function:
```
object :user do
field :posts, list_of(:post) do
arg :limit, non_null(:integer)
resolve dataloader(Blog, fn user, args, info ->
args = Map.update!(args, :limit, fn val ->
max(min(val, 20), 0)
end)
{:posts, args}
end)
end
end
```
In this case we want to make sure that the limit value cannot be larger than
`20`. By passing a callback function to `dataloader/2` we can ensure that
the value will fall nicely between 0 and 20.
## Options
- `:args` default: `%{}`. Any arguments you want to always pass into the
`Dataloader.load/4` call. Resolver arguments are merged into this value and,
in the event of a conflict, the resolver arguments win.
- `:callback` default: return result wrapped in ok or error tuple.
Callback that is run with result of dataloader. It receives the result as
the first argument, and the parent and args as second and third. Can be used
to e.g. compute fields on the return value of the loader. Should return an
ok or error tuple.
- `:use_parent` default: `false`. This option controls whether the `dataloader/2`
helper uses any pre-existing value on the parent. I.e. if you return
`%{author: %User{...}}` from a blog post, the helper will by default ignore the
pre-existing author and load it fresh. Set it to `true` if you want to opt into
using the pre-existing value instead.
Ultimately, this helper calls `Dataloader.load/4`
using the loader in your context, the source you provide, the tuple `{resource, args}`
as the batch key, and then the parent value of the field
```
def dataloader(source_name, resource, opts) do
fn parent, args, %{context: %{loader: loader}} ->
args = Map.merge(opts[:args] || %{}, args)
loader
|> Dataloader.load(source_name, {resource, args}, parent)
|> on_load(fn loader ->
{:ok, Dataloader.get(loader, source_name, {resource, args}, parent)}
end)
end
end
```
"""
def dataloader(source, fun, opts \\ [])
@spec dataloader(Dataloader.source_name(), dataloader_key_fun | any, [dataloader_opt]) ::
dataloader_key_fun
def dataloader(source, fun, opts) when is_function(fun, 3) do
fn parent, args, %{context: %{loader: loader}} = res ->
{resource, args} = fun.(parent, args, res)
do_dataloader(loader, source, resource, args, parent, opts)
end
end
def dataloader(source, resource, opts) do
fn parent, args, %{context: %{loader: loader}} ->
do_dataloader(loader, source, resource, args, parent, opts)
end
end
defp use_parent(loader, source, resource, parent, args, opts) when is_map(parent) do
with true <- Keyword.get(opts, :use_parent, false),
{:ok, val} <- Map.fetch(parent, resource) do
Dataloader.put(loader, source, {resource, args}, parent, val)
else
_ -> loader
end
end
defp use_parent(loader, _source, _resource, _parent, _args, _opts), do: loader
defp do_dataloader(loader, source, resource, args, parent, opts) do
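# The statically configured :args act as defaults; resolver arguments are
# merged on top, so resolver args win on key conflicts.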
args =
opts
|> Keyword.get(:args, %{})
|> Map.merge(args)
loader
|> use_parent(source, resource, parent, args, opts)
|> Dataloader.load(source, {resource, args}, parent)
|> on_load(fn loader ->
callback = Keyword.get(opts, :callback, default_callback(loader))
loader
|> Dataloader.get(source, {resource, args}, parent)
|> callback.(parent, args)
end)
end
defp default_callback(%{options: loader_options}) do
if loader_options[:get_policy] == :tuples do
fn result, _parent, _args -> result end
else
fn result, _parent, _args -> {:ok, result} end
end
end
end
end
|
lib/absinthe/resolution/helpers.ex
| 0.905431
| 0.769102
|
helpers.ex
|
starcoder
|
defmodule Ash.OptionsHelpers do
@type schema :: NimbleOptions.schema()
@moduledoc false
def merge_schemas(left, right, section \\ nil) do
new_right =
Enum.map(right, fn {key, value} ->
{key, Keyword.put(value, :subsection, section)}
end)
Keyword.merge(left, new_right)
end
def validate(opts, schema) do
NimbleOptions.validate(opts, sanitize_schema(schema))
end
def validate!(opts, schema) do
NimbleOptions.validate!(opts, sanitize_schema(schema))
end
def docs(schema) do
NimbleOptions.docs(sanitize_schema(schema))
end
defp sanitize_schema(schema) do
Enum.map(schema, fn {key, opts} ->
new_opts =
case opts[:type] do
{:one_of, values} ->
Keyword.put(opts, :type, {:in, values})
_ ->
opts
end
{key, new_opts}
end)
end
def map(value) when is_map(value), do: {:ok, value}
def map(_), do: {:error, "must be a map"}
def ash_type(type) do
type = Ash.Type.get_type(type)
if Ash.Type.ash_type?(type) do
{:ok, type}
else
{:error, "Attribute type must be a built in type or a type module, got: #{inspect(type)}"}
end
end
def list_of_atoms(value) do
if is_list(value) and Enum.all?(value, &is_atom/1) do
{:ok, value}
else
{:error, "Expected a list of atoms"}
end
end
def module_and_opts({module, opts}) when is_atom(module) do
if Keyword.keyword?(opts) do
{:ok, {module, opts}}
else
{:error, "Expected the second element to be a keyword list, got: #{inspect(opts)}"}
end
end
def module_and_opts({other, _}) do
{:error, "Expected the first element to be a module, got: #{inspect(other)}"}
end
def module_and_opts(module) do
module_and_opts({module, []})
end
def default(value) when is_function(value, 0), do: {:ok, value}
def default({module, function, args})
when is_atom(module) and is_atom(function) and is_list(args),
do: {:ok, {module, function, args}}
def default(value), do: {:ok, value}
def make_required!(options, field) do
Keyword.update!(options, field, &Keyword.put(&1, :required, true))
end
def make_optional!(options, field) do
Keyword.update!(options, field, &Keyword.delete(&1, :required))
end
def set_type!(options, field, type) do
Keyword.update!(options, field, &Keyword.put(&1, :type, type))
end
def set_default!(options, field, value) do
Keyword.update!(options, field, fn config ->
config
|> Keyword.put(:default, value)
|> Keyword.delete(:required)
end)
end
def append_doc!(options, field, to_append) do
Keyword.update!(options, field, fn opt_config ->
Keyword.update(opt_config, :doc, to_append, fn existing ->
existing <> " - " <> to_append
end)
end)
end
end
|
lib/ash/options_helpers.ex
| 0.681939
| 0.408837
|
options_helpers.ex
|
starcoder
|
defmodule Membrane.RawVideo.Parser do
@moduledoc """
Simple module responsible for splitting the incoming buffers into
frames of raw (uncompressed) video in the desired format.
The parser sends proper caps when it moves to the playing state.
No data analysis is done; this element simply ensures that
the resulting packets have the proper size.
"""
use Membrane.Filter
alias Membrane.{Buffer, Payload}
alias Membrane.{RawVideo, RemoteStream}
def_input_pad :input, demand_unit: :bytes, demand_mode: :auto, caps: RemoteStream
def_output_pad :output, demand_mode: :auto, caps: {RawVideo, aligned: true}
def_options pixel_format: [
type: :atom,
spec: RawVideo.pixel_format_t(),
description: """
Format used to encode pixels of the video frame.
"""
],
width: [
spec: pos_integer(),
description: """
Width of a frame in pixels.
"""
],
height: [
spec: pos_integer(),
description: """
Height of a frame in pixels.
"""
],
framerate: [
type: :tuple,
spec: RawVideo.framerate_t(),
default: {0, 1},
description: """
Framerate of video stream. Passed forward in caps.
"""
]
@supported_formats [:I420, :I422, :I444, :RGB, :BGRA, :RGBA, :NV12, :NV21, :YV12, :AYUV]
@impl true
def handle_init(opts) do
unless opts.pixel_format in @supported_formats do
raise """
Unsupported frame pixel format: #{inspect(opts.pixel_format)}
The element supports: #{Enum.map_join(@supported_formats, ", ", &inspect/1)}
"""
end
frame_size =
case RawVideo.frame_size(opts.pixel_format, opts.width, opts.height) do
{:ok, frame_size} ->
frame_size
{:error, :invalid_dimensions} ->
raise "Provided dimensions (#{opts.width}x#{opts.height}) are invalid for #{inspect(opts.pixel_format)} pixel format"
end
caps = %RawVideo{
pixel_format: opts.pixel_format,
width: opts.width,
height: opts.height,
framerate: opts.framerate,
aligned: true
}
{num, denom} = caps.framerate
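# With a framerate of num/denom frames per second, a single frame lasts
# denom/num seconds, expressed here in Membrane.Time units (0 when unknown).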
frame_duration = if num == 0, do: 0, else: Ratio.new(denom * Membrane.Time.second(), num)
{:ok,
%{
caps: caps,
timestamp: 0,
frame_duration: frame_duration,
frame_size: frame_size,
queue: []
}}
end
@impl true
def handle_prepared_to_playing(_ctx, state) do
{{:ok, caps: {:output, state.caps}}, state}
end
@impl true
def handle_caps(:input, _caps, _ctx, state) do
# Do not forward caps
{:ok, state}
end
@impl true
def handle_process_list(:input, buffers, _ctx, state) do
%{frame_size: frame_size} = state
payload_iodata =
buffers |> Enum.map(fn %Buffer{payload: payload} -> Payload.to_binary(payload) end)
queue = [payload_iodata | state.queue]
size = IO.iodata_length(queue)
if size < frame_size do
{:ok, %{state | queue: queue}}
else
data_binary = queue |> Enum.reverse() |> IO.iodata_to_binary()
{payloads, tail} = Bunch.Binary.chunk_every_rem(data_binary, frame_size)
{bufs, state} =
payloads
|> Enum.map_reduce(state, fn payload, state_acc ->
timestamp = state_acc.timestamp |> Ratio.floor()
{%Buffer{payload: payload, pts: timestamp}, bump_timestamp(state_acc)}
end)
{{:ok, buffer: {:output, bufs}}, %{state | queue: [tail]}}
end
end
@impl true
def handle_prepared_to_stopped(_ctx, state) do
{:ok, %{state | queue: []}}
end
defp bump_timestamp(%{caps: %{framerate: {0, _}}} = state) do
state
end
defp bump_timestamp(state) do
use Ratio
%{timestamp: timestamp, frame_duration: frame_duration} = state
timestamp = timestamp + frame_duration
%{state | timestamp: timestamp}
end
end
|
lib/membrane_raw_video/parser.ex
| 0.877247
| 0.555918
|
parser.ex
|
starcoder
|
defmodule Stripe.Webhook do
@moduledoc """
Creates a Stripe Event from webhook's payload if signature is valid.
"""
@default_tolerance 300
@expected_scheme "v1"
@doc """
Verify webhook payload and return a Stripe event.
`payload` is the raw, unparsed content body sent by Stripe, which can be
retrieved with `Plug.Conn.read_body/2`. Note that `Plug.Parsers` will read
and discard the body, so you must implement a [custom body reader][1] if the
plug is located earlier in the pipeline.
`signature` is the value of `Stripe-Signature` header, which can be fetched
with `Plug.Conn.get_req_header/2`.
`secret` is your webhook endpoint's secret from the Stripe Dashboard.
`tolerance` is the allowed deviation in seconds from the current system time
to the timestamp found in `signature`. Defaults to 300 seconds (5 minutes).
Stripe API reference:
https://stripe.com/docs/webhooks/signatures#verify-manually
[1]: https://hexdocs.pm/plug/Plug.Parsers.html#module-custom-body-reader
## Example
case Stripe.Webhook.construct_event(payload, signature, secret) do
{:ok, %Stripe.Event{} = event} ->
# Return 200 to Stripe and handle event
{:error, reason} ->
# Reject webhook by responding with non-2XX
end
"""
@spec construct_event(String.t(), String.t(), String.t(), integer) ::
{:ok, Stripe.Event.t()} | {:error, any}
def construct_event(payload, signature_header, secret, tolerance \\ @default_tolerance) do
case verify_header(payload, signature_header, secret, tolerance) do
:ok ->
{:ok, convert_to_event!(payload)}
error ->
error
end
end
defp verify_header(payload, signature_header, secret, tolerance) do
case get_timestamp_and_signatures(signature_header, @expected_scheme) do
{nil, _} ->
{:error, "Unable to extract timestamp and signatures from header"}
{_, []} ->
{:error, "No signatures found with expected scheme #{@expected_scheme}"}
{timestamp, signatures} ->
with {:ok, timestamp} <- check_timestamp(timestamp, tolerance),
{:ok, _signatures} <- check_signatures(signatures, timestamp, payload, secret) do
:ok
else
{:error, error} -> {:error, error}
end
end
end
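# The Stripe-Signature header is a comma-separated list of key=value pairs,
# e.g. "t=1492774577,v1=5257a869e7...,v0=6ffbb59b..." (illustrative values).
# We keep the "t=" timestamp and every signature under the expected "v1" scheme.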
defp get_timestamp_and_signatures(signature_header, scheme) do
signature_header
|> String.split(",")
|> Enum.map(&String.split(&1, "="))
|> Enum.reduce({nil, []}, fn
["t", timestamp], {nil, signatures} ->
{to_integer(timestamp), signatures}
[^scheme, signature], {timestamp, signatures} ->
{timestamp, [signature | signatures]}
_, acc ->
acc
end)
end
defp to_integer(timestamp) do
case Integer.parse(timestamp) do
{timestamp, _} ->
timestamp
:error ->
nil
end
end
defp check_timestamp(timestamp, tolerance) do
now = System.system_time(:second)
tolerance_zone = now - tolerance
if timestamp < tolerance_zone do
{:error, "Timestamp outside the tolerance zone (#{now})"}
else
{:ok, timestamp}
end
end
defp check_signatures(signatures, timestamp, payload, secret) do
signed_payload = "#{timestamp}.#{payload}"
expected_signature = compute_signature(signed_payload, secret)
if Enum.any?(signatures, &secure_equals?(&1, expected_signature)) do
{:ok, signatures}
else
{:error, "No signatures found matching the expected signature for payload"}
end
end
defp compute_signature(payload, secret) do
:crypto.mac(:hmac, :sha256, secret, payload)
|> Base.encode16(case: :lower)
end
defp secure_equals?(input, expected) when byte_size(input) == byte_size(expected) do
input = String.to_charlist(input)
expected = String.to_charlist(expected)
secure_compare(input, expected)
end
defp secure_equals?(_, _), do: false
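# Constant-time comparison: XOR each byte pair and OR the result into an
# accumulator, so the run time does not leak the position of a mismatch.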
defp secure_compare(acc \\ 0, input, expected)
defp secure_compare(acc, [], []), do: acc == 0
defp secure_compare(acc, [input_codepoint | input], [expected_codepoint | expected]) do
import Bitwise
acc
|> bor(input_codepoint ^^^ expected_codepoint)
|> secure_compare(input, expected)
end
defp convert_to_event!(payload) do
payload
|> Poison.decode!()
|> Stripe.Converter.convert_result()
end
end
|
lib/stripe/webhook.ex
| 0.892331
| 0.60577
|
webhook.ex
|
starcoder
|
defmodule Forage.Codec.Decoder do
@moduledoc """
Functionality to decode a Phoenix `params` map into a form suitable for use
with the query builders and pagination libraries
"""
alias Forage.Codec.Exceptions.InvalidAssocError
alias Forage.Codec.Exceptions.InvalidFieldError
alias Forage.Codec.Exceptions.InvalidSortDirectionError
alias Forage.Codec.Exceptions.InvalidPaginationDataError
alias Forage.ForagePlan
@type schema() :: atom()
@type assoc() :: {schema(), atom(), atom()}
@doc """
Decodes a params map into a forage plan (`Forage.ForagePlan`).
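A minimal illustration (the `MyApp.Post` schema and its fields are hypothetical):
    params = %{
      "_search" => %{"title" => %{"op" => "contains", "val" => "elixir"}},
      "_sort" => %{"title" => %{"direction" => "asc"}}
    }
    decode(params, MyApp.Post)
    #=> %ForagePlan{
    #     search: [[field: {:simple, :title}, operator: "contains", value: "elixir"]],
    #     sort: [[field: :title, direction: :asc]],
    #     pagination: []
    #   }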
"""
def decode(params, schema) do
search = decode_search(params, schema)
sort = decode_sort(params, schema)
pagination = decode_pagination(params, schema)
%ForagePlan{search: search, sort: sort, pagination: pagination}
end
@doc """
Extract and decode the search filters from the `params` map into a list of filters.
"""
def decode_search(%{"_search" => search_data}, schema) do
decoded_fields =
for {field_string, %{"op" => op, "val" => val}} <- search_data do
field_or_assoc = decode_field_or_assoc(field_string, schema)
[field: field_or_assoc, operator: op, value: val]
end
Enum.sort(decoded_fields)
end
def decode_search(_params, _schema), do: []
def decode_field_or_assoc(field_string, schema) do
parts = String.split(field_string, ".")
case parts do
[field_name] ->
field = safe_field_name_to_atom!(field_name, schema)
{:simple, field}
[local_name, remote_name] ->
assoc = safe_field_names_to_assoc!(local_name, remote_name, schema)
{:assoc, assoc}
_ ->
raise ArgumentError, "Invalid field name '#{field_string}'."
end
end
@doc """
Extract and decode the sort fields from the `params` map into a keyword list.
"""
def decode_sort(%{"_sort" => sort}, schema) do
# TODO: make this more robust
decoded =
for {field_name, %{"direction" => direction}} <- sort do
field_atom = safe_field_name_to_atom!(field_name, schema)
direction = decode_direction(direction)
[field: field_atom, direction: direction]
end
# Sort the result so that the order is always the same
Enum.sort(decoded)
end
def decode_sort(_params, _schema), do: []
@doc """
Extract and decode the pagination data from the `params` map into a keyword list.
"""
def decode_pagination(%{"_pagination" => pagination}, _schema) do
decoded_after =
case pagination["after"] do
nil -> []
after_ -> [after: after_]
end
decoded_before =
case pagination["before"] do
nil -> []
before -> [before: before]
end
decoded_after ++ decoded_before
end
def decode_pagination(_params, _schema), do: []
@spec decode_direction(String.t() | nil) :: atom() | nil
defp decode_direction("asc"), do: :asc
defp decode_direction("desc"), do: :desc
defp decode_direction(nil), do: nil
defp decode_direction(value), do: raise(InvalidSortDirectionError, value)
def pagination_data_to_integer!(value) do
try do
String.to_integer(value)
rescue
ArgumentError -> raise InvalidPaginationDataError, value
end
end
@spec safe_field_names_to_assoc!(String.t(), String.t(), atom()) :: assoc()
def safe_field_names_to_assoc!(local_name, remote_name, local_schema) do
local = safe_assoc_name_to_atom!(local_name, local_schema)
remote_schema = local_schema.__schema__(:association, local).related
remote = safe_field_name_to_atom!(remote_name, remote_schema)
{remote_schema, local, remote}
end
def remote_schema(local_name, local_schema) do
local = safe_assoc_name_to_atom!(local_name, local_schema)
remote_schema = local_schema.__schema__(:association, local).related
remote_schema
end
@doc false
@spec safe_assoc_name_to_atom!(String.t(), schema()) :: atom()
def safe_assoc_name_to_atom!(assoc_name, schema) do
# This function performs the dangerous job of turning a string into an atom.
schema_associations = schema.__schema__(:associations)
found = Enum.find(schema_associations, fn assoc -> assoc_name == Atom.to_string(assoc) end)
case found do
nil ->
raise InvalidAssocError, {schema, assoc_name}
_ ->
found
end
end
@doc false
@spec safe_field_name_to_atom!(String.t(), schema()) :: atom()
def safe_field_name_to_atom!(field_name, schema) do
# This function performs the dangerous job of turning a string into an atom.
# Because the atom table on the BEAM is limited, there is a cap on the number of atoms that can exist.
# This means generating atoms at runtime is very dangerous,
# especially if they're being generated from user input.
# The whole point of `forage` is to process "raw" (i.e. untrusted) user input,
# so we must be especially careful.
# Using `String.to_atom()` is completely out of the question.
# Using `String.to_existing_atom()` is a possibility, but we have chosen to do it in another way.
# Instead of turning the string into an atom, we iterate over the schema fields,
# convert them into strings and check the strings for equality.
# When we find a match, we return the atom.
schema_fields = schema.__schema__(:fields)
found = Enum.find(schema_fields, fn field -> field_name == Atom.to_string(field) end)
case found do
nil ->
raise InvalidFieldError, {schema, field_name}
_ ->
found
end
end
end
|
lib/forage/codec/decoder.ex
| 0.77949
| 0.578061
|
decoder.ex
|
starcoder
|
defmodule Chess.Utils do
@moduledoc """
"""
alias Chess.{Figure}
defmacro __using__(_opts) do
quote do
defp coordinates(move_from), do: String.split(move_from, "", trim: true)
defp opponent("w"), do: "b"
defp opponent(_), do: "w"
defp define_active_figures(squares, active) do
squares
|> Stream.filter(fn {_, %Figure{color: color}} -> color == active end)
|> calc_attacked_squares(squares, "attack")
end
defp calc_attacked_squares(figures, squares, type) do
Stream.map(figures, fn x ->
{
x,
check_attacked_squares(squares, x, type) |> List.flatten()
}
end)
end
defp define_attackers(active_figures, king_square) do
Enum.filter(active_figures, fn {_, squares} -> king_square in squares end)
end
defp check_attacked_squares(squares, {square, %Figure{type: type}}, _) when type in ["k", "q"] do
check_diagonal_moves(squares, convert_to_indexes(square), type) ++ check_linear_moves(squares, convert_to_indexes(square), type)
end
defp check_attacked_squares(squares, {square, %Figure{type: "b"}}, _) do
check_diagonal_moves(squares, convert_to_indexes(square), "b")
end
defp check_attacked_squares(squares, {square, %Figure{type: "r"}}, _) do
check_linear_moves(squares, convert_to_indexes(square), "r")
end
defp check_attacked_squares(squares, {square, %Figure{type: "n"}}, _) do
check_knight_moves(squares, convert_to_indexes(square))
end
defp check_attacked_squares(squares, {square, %Figure{color: color, type: "p"}}, "attack") do
check_pion_attack_moves(squares, convert_to_indexes(square), color)
end
defp check_attacked_squares(squares, {square, %Figure{color: color, type: "p"}}, "block") do
check_pion_moves(squares, convert_to_indexes(square), color)
end
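# Converts a square atom such as :e4 into [x, y] list indexes, assuming
# Chess.x_fields / Chess.y_fields enumerate the files and ranks in order.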
defp convert_to_indexes(square) do
square = Atom.to_string(square)
x_square_index = String.first(square)
y_square_index = String.last(square)
[
Enum.find_index(Chess.x_fields, fn x -> x == x_square_index end),
Enum.find_index(Chess.y_fields, fn x -> x == y_square_index end)
]
end
defp check_diagonal_moves(squares, square, "k") do
Enum.map(Chess.diagonals, fn route -> check_attacked_square(squares, square, route, 1, 1, []) end)
end
defp check_diagonal_moves(squares, square, _) do
Enum.map(Chess.diagonals, fn route -> check_attacked_square(squares, square, route, 1, 7, []) end)
end
defp check_linear_moves(squares, square, "k") do
Enum.map(Chess.linears, fn route -> check_attacked_square(squares, square, route, 1, 1, []) end)
end
defp check_linear_moves(squares, square, _) do
Enum.map(Chess.linears, fn route -> check_attacked_square(squares, square, route, 1, 7, []) end)
end
defp check_knight_moves(squares, square) do
Enum.map(Chess.knights, fn route -> check_attacked_square(squares, square, route, 1, 1, []) end)
end
defp check_pion_attack_moves(squares, square, color) do
routes = if color == "w", do: Chess.white_pions, else: Chess.black_pions
Enum.map(routes, fn route -> check_attacked_square(squares, square, route, 1, 1, []) end)
end
defp check_pion_moves(squares, square, color) do
routes = if color == "w", do: Chess.white_pions_moves, else: Chess.black_pions_moves
Enum.map(routes, fn route -> check_attacked_square(squares, square, route, 1, 1, []) end)
end
defp check_attacked_square(squares, [x_index, y_index], [x_route, y_route], current_step, limit, acc) when current_step <= limit do
x_square_index = x_index + x_route * current_step
y_square_index = y_index + y_route * current_step
if x_square_index in Chess.indexes && y_square_index in Chess.indexes do
square = :"#{Enum.at(Chess.x_fields, x_square_index)}#{Enum.at(Chess.y_fields, y_square_index)}"
acc = [square | acc]
# check barriers on the route
if Keyword.has_key?(squares, square) do
%Figure{type: type} = squares[square]
case type do
# calculate attacked squares behind king
"k" -> check_attacked_square(squares, [x_index, y_index], [x_route, y_route], current_step + 1, limit, acc)
# stop calculating attacked squares
_ -> check_attacked_square(squares, [x_index, y_index], [x_route, y_route], limit + 1, limit, acc)
end
else
# add empty field to attacked squares
check_attacked_square(squares, [x_index, y_index], [x_route, y_route], current_step + 1, limit, acc)
end
else
# stop calculating attacked squares
check_attacked_square(squares, [x_index, y_index], [x_route, y_route], limit + 1, limit, acc)
end
end
defp check_attacked_square(_, _, _, current_step, limit, acc) when current_step > limit, do: acc
end
end
end
|
lib/chess/utils/utils.ex
| 0.729327
| 0.625824
|
utils.ex
|
starcoder
|
defmodule Kitt.Message.PSM do
@moduledoc """
Defines the structure and instantiation function
for creating a J2735-compliant PersonalSafetyMessage.
A `PSM` defines the information exchanged between non-vehicle
actors within a DSRC-capable environment and the vehicles
and infrastructure of the environment
"""
@typedoc "Defines the structure of a PersonalSafetyMessage and the data elements comprising its fields"
@type t :: %__MODULE__{
accelSet: Kitt.Types.acceleration_set_4_way(),
accuracy: Kitt.Types.positional_accuracy(),
activityType: activity(),
activitySubType: sub_activity(),
assistType: assistance(),
attachment:
:unavailable
| :stroller
| :bicycleTrailer
| :cart
| :wheelchair
| :otherWalkAssistAttachments
| :pet
| {:asn1_enum, integer()},
attachmentRadius: non_neg_integer(),
basicType:
:unavailable
| :aPEDESTRIAN
| :aPEDALCYCLIST
| :aPUBLICSAFETYWORKER
| :anANIMAL
| {:asn1_enum, non_neg_integer()},
clusterRadius: non_neg_integer(),
clusterSize: :unavailable | :small | :medium | :large | {:asn1_enum, non_neg_integer()},
crossRequest: boolean(),
crossState: boolean(),
eventResponderType:
:unavailable
| :towOperator
| :fireAndEMSWorker
| :aDOTWorker
| :lawEnforcement
| :hazmatResponder
| :animalControlWorker
| :otherPersonnel
| {:asn1_enum, non_neg_integer()},
heading: non_neg_integer(),
id: non_neg_integer(),
msgCnt: non_neg_integer(),
pathHistory: Kitt.Types.path_history(),
pathPrediction: Kitt.Types.path_prediction(),
position: Kitt.Types.position_3d(),
propulsion:
{:human,
:unavailable | :otherTypes | :onFoot | :skateBoard | :pushOrKickScooter | :wheelchair}
| {:animal, :unavailable | :otherTypes | :animalMounted | :animalDrawnCarriage}
| {:motor,
:unavailable
| :otherTypes
| :wheelChair
| :bicycle
| :scooter
| :selfBalancingDevice},
regional: [Kitt.Types.regional_extension()],
secMark: non_neg_integer(),
sizing: sizing(),
speed: non_neg_integer(),
useState: use_state()
}
@type activity ::
:unavailable
| :workingOnRoad
| :settingUpClosures
| :respondingToEvents
| :directingTraffic
| :otherActivities
@type sub_activity ::
:unavailable
| :policeAndTrafficOfficers
| :trafficControlPersons
| :railroadCrossingGuards
| :civilDefenseNationalGuardMilitaryPolice
| :emergencyOrganizationPersonnel
| :highwayServiceVehiclePersonnel
@type assistance ::
:unavailable
| :otherType
| :vision
| :hearing
| :movement
| :cognitition
@type sizing ::
:unavailable
| :smallStature
| :largeStature
| :erraticMoving
| :slowMoving
@type use_state ::
:unavailable
| :other
| :idle
| :listeningToAudio
| :typing
| :calling
| :playingGames
| :reading
| :viewing
@derive Jason.Encoder
@enforce_keys [:accuracy, :basicType, :heading, :id, :msgCnt, :position, :secMark, :speed]
defstruct [
:accelSet,
:accuracy,
:activityType,
:activitySubType,
:assistType,
:attachment,
:attachmentRadius,
:basicType,
:clusterRadius,
:clusterSize,
:crossRequest,
:crossState,
:eventResponderType,
:heading,
:id,
:msgCnt,
:pathHistory,
:pathPrediction,
:position,
:propulsion,
:regional,
:secMark,
:sizing,
:speed,
:useState
]
@doc """
Produces the `PSM` message struct from an equivalent map or keyword input
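A sketch with illustrative values only, where `accuracy` and `position` stand
for previously built `Kitt.Types` values:
    Kitt.Message.PSM.new(%{
      accuracy: accuracy,
      basicType: :aPEDESTRIAN,
      heading: 0,
      id: 1,
      msgCnt: 1,
      position: position,
      secMark: 0,
      speed: 0
    })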
"""
@spec new(map() | keyword()) :: t()
def new(message), do: struct(__MODULE__, message)
@doc """
Returns the `PSM` identifying integer
"""
@spec type_id() :: non_neg_integer()
def type_id(), do: :DSRC.personalSafetyMessage()
@doc """
Returns the `PSM` identifying atom recognized by the ASN1 spec
"""
@spec type() :: atom()
def type(), do: :PersonalSafetyMessage
end
|
lib/kitt/message/psm.ex
| 0.782538
| 0.673051
|
psm.ex
|
starcoder
|
defmodule Grizzly.SmartStart.MetaExtension.UUID16 do
@moduledoc """
This is used to advertise 16 bytes of manufactured-defined information that
is unique for a given product.
Z-Wave UUIDs are not limited to the format outlined in RFC 4122 but can also
be ASCII characters and a relevant prefix.
"""
@typedoc """
A Z-Wave UUID can be formatted in one of three ways: `:ascii`, `:hex`,
or `:rfc4122`.
Both `:ascii` and `:hex` can also have the prefix `sn:` or `UUID:`.
Valid `:hex` formatted UUIDs look like:
- `0102030405060708090A141516171819`
- `sn:0102030405060708090A141516171819`
- `UUID:0102030405060708090A141516171819`
Valid `:ascii` formatted UUIDs look like:
- `Hello Elixir!!!`
- `sn:Hello Elixir!!!`
- `UUID:Hello Elixir!!!`
Lastly, the `rfc4122` format looks like `58D5E212-165B-4CA0-909B-C86B9CEE0111`,
where every two hex digits encode one byte.
More information about RFC 4122 and the specification format can be
found [here](https://tools.ietf.org/html/rfc4122#section-4.1.2).
"""
@behaviour Grizzly.SmartStart.MetaExtension
@type format :: :ascii | :hex | :rfc4122
@type t :: %__MODULE__{
uuid: String.t(),
format: format()
}
@enforce_keys [:uuid, :format]
defstruct uuid: nil, format: :hex
defguardp is_format_hex(value) when value in [0, 2, 4]
defguardp is_format_ascii(value) when value in [1, 3, 5]
defguardp is_format_rfc4122(value) when value == 6
@doc """
Make a new `UUID16.t()`
"""
@spec new(String.t(), format()) :: {:ok, t()} | {:error, :invalid_uuid_length | :invalid_format}
def new(uuid, format) do
with :ok <- validate_format(format),
uuid_no_prefix = remove_uuid_prefix(uuid),
:ok <- validate_uuid_length(uuid_no_prefix, format) do
{:ok, %__MODULE__{uuid: uuid, format: format}}
end
end
@doc """
Take a binary string and try to make a `UUID16.t()` from it
If the critical bit is set in the binary this will return
`{:error, :critical_bit_set}` and the information should be ignored.
If the format in the binary is not part of the defined Z-Wave specification
this will return `{:error, :invalid_format}`
"""
@impl Grizzly.SmartStart.MetaExtension
@spec from_binary(binary()) ::
{:ok, t()} | {:error, :critical_bit_set | :invalid_format | :invalid_binary}
def from_binary(<<0x03::size(7), 0x00::size(1), 0x11, presentation_format, uuid::binary>>) do
with {:ok, uuid_string} <- uuid_from_binary(presentation_format, uuid),
{:ok, format} <- format_from_byte(presentation_format) do
new(uuid_string, format)
end
end
def from_binary(<<0x03::size(7), 0x01::size(1), _rest::binary>>) do
{:error, :critical_bit_set}
end
def from_binary(bin) when is_binary(bin), do: {:error, :invalid_binary}
@doc """
Make a binary string from a `UUID16.t()`
"""
@impl Grizzly.SmartStart.MetaExtension
@spec to_binary(t()) :: {:ok, binary()}
def to_binary(%__MODULE__{uuid: uuid, format: format}) do
[format_prefix, uuid] = get_format_prefix_and_uuid(uuid)
uuid_binary = uuid_to_binary(uuid, format)
{:ok, <<0x06, 0x11, format_to_byte(format, format_prefix)>> <> uuid_binary}
end
defp get_format_prefix_and_uuid(uuid_string) do
case String.split(uuid_string, ":") do
[uuid] -> [:none, uuid]
[prefix, _uuid] = result when prefix in ["sn", "UUID"] -> result
end
end
defp format_from_byte(format_byte) when is_format_hex(format_byte), do: {:ok, :hex}
defp format_from_byte(format_byte) when is_format_ascii(format_byte), do: {:ok, :ascii}
defp format_from_byte(format_byte) when is_format_rfc4122(format_byte), do: {:ok, :rfc4122}
defp format_from_byte(format_byte) when format_byte in 7..99, do: {:ok, :hex}
defp format_from_byte(_), do: {:error, :invalid_format}
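# Presentation-format byte: 0/2/4 mean hex with no prefix, an "sn:" prefix,
# or a "UUID:" prefix; 1/3/5 are the same pattern for ASCII; 6 is RFC 4122.
# When decoding, 7..99 fall back to plain hex.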
defp format_to_byte(:hex, :none), do: 0
defp format_to_byte(:hex, "sn"), do: 2
defp format_to_byte(:hex, "UUID"), do: 4
defp format_to_byte(:ascii, :none), do: 1
defp format_to_byte(:ascii, "sn"), do: 3
defp format_to_byte(:ascii, "UUID"), do: 5
defp format_to_byte(:rfc4122, :none), do: 6
defp uuid_to_binary(uuid, :hex) do
hex_uuid_to_binary(uuid, <<>>)
end
defp uuid_to_binary(uuid, :ascii) do
ascii_uuid_to_binary(uuid)
end
defp uuid_to_binary(uuid, :rfc4122) do
rfc4122_uuid_to_binary(uuid)
end
defp uuid_to_binary(_uuid, _format), do: {:error, :invalid_uuid_length}
defp rfc4122_uuid_to_binary(uuid) do
uuid
|> String.split("-")
|> Enum.flat_map(&String.split(&1, "", trim: true))
|> Enum.chunk_every(2)
|> Enum.map(fn digits ->
digits
|> Enum.join("")
|> String.to_integer(16)
end)
|> :erlang.list_to_binary()
end
defp hex_uuid_to_binary("", binary) do
binary
end
defp hex_uuid_to_binary(uuid, binary) do
{digit, digits} = String.split_at(uuid, 2)
byte = String.to_integer(digit, 16)
hex_uuid_to_binary(digits, binary <> <<byte>>)
end
defp ascii_uuid_to_binary(uuid_string) do
uuid_string
|> String.split("", trim: true)
|> Enum.reduce(<<>>, &(&2 <> &1))
end
defp uuid_as_hex_digits(uuid) do
hex_digits_as_string(uuid)
end
defp uuid_as_ascii(uuid) do
uuid_out_string =
uuid
|> to_charlist()
|> to_string()
uuid_out_string
end
defp uuid_from_binary(format, uuid) when is_format_hex(format) do
formatted_uuid = uuid_as_hex_digits(uuid)
case format do
0 -> {:ok, formatted_uuid}
2 -> {:ok, "sn:#{formatted_uuid}"}
4 -> {:ok, "UUID:#{formatted_uuid}"}
end
end
defp uuid_from_binary(format, uuid) when is_format_ascii(format) do
formatted_uuid = uuid_as_ascii(uuid)
case format do
1 -> {:ok, formatted_uuid}
3 -> {:ok, "sn:#{formatted_uuid}"}
5 -> {:ok, "UUID:#{formatted_uuid}"}
end
end
defp uuid_from_binary(
6,
<<time_low::binary-size(4), time_mid::binary-size(2),
time_hi_and_version::binary-size(2), clock_seq::binary-size(2), node::binary-size(6)>>
) do
formatted =
[
time_low,
time_mid,
time_hi_and_version,
clock_seq,
node
]
|> Enum.map(&hex_digits_as_string/1)
|> Enum.join("-")
{:ok, formatted}
end
defp uuid_from_binary(format, uuid) when format in 7..99 do
uuid_from_binary(0, uuid)
end
defp hex_digits_as_string(binary) do
list = :erlang.binary_to_list(binary)
Enum.reduce(list, "", fn integer, uuid_string ->
if integer < 16 do
uuid_string <> "0" <> Integer.to_string(integer, 16)
else
uuid_string <> Integer.to_string(integer, 16)
end
end)
end
defp validate_format(format) when format in [:hex, :ascii, :rfc4122], do: :ok
defp validate_format(_format), do: {:error, :invalid_format}
defp remove_uuid_prefix("sn:" <> uuid), do: uuid
defp remove_uuid_prefix("UUID:" <> uuid), do: uuid
defp remove_uuid_prefix(uuid), do: uuid
defp validate_uuid_length(uuid, :hex) when byte_size(uuid) == 32, do: :ok
defp validate_uuid_length(uuid, :ascii) when byte_size(uuid) == 16, do: :ok
defp validate_uuid_length(uuid, :rfc4122) when byte_size(uuid) == 36, do: :ok
defp validate_uuid_length(_uuid, _format), do: {:error, :invalid_uuid_length}
end
|
lib/grizzly/smart_start/meta_extension/uuid16.ex
| 0.88631
| 0.590012
|
uuid16.ex
|
starcoder
|
defmodule Advent20.GameConsole do
@moduledoc """
Day 8: Handheld Halting
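The boot code is one instruction per line ("acc", "jmp" or "nop" followed by a
signed integer), for example:
    nop +0
    acc +1
    jmp +4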
"""
defp parse_input(input) do
input
|> String.split("\n", trim: true)
|> Stream.map(&Regex.run(~r/(.{3}) (.+)$/, &1, capture: :all_but_first))
|> Stream.map(fn [instruction, string_value] -> {instruction, String.to_integer(string_value)} end)
|> Stream.with_index()
|> Stream.map(fn {value, index} -> {index, value} end)
|> Enum.into(%{})
end
@doc """
Part 1: Run your copy of the boot code. Immediately before any instruction
is executed a second time, what value is in the accumulator?
"""
def acc_value_after_first_loop(input) do
state = %{pointer: 0, acc: 0}
{:loop, acc, _} =
input
|> parse_input()
|> run_program(state, MapSet.new())
acc
end
@doc """
Part 2: What is the value of the accumulator after the program terminates?
"""
def acc_value_at_program_termination(input) do
state = %{pointer: 0, acc: 0}
parsed_input = parse_input(input)
# The program terminates when the pointer lands one past the last instruction of the boot code
final_instruction = parsed_input |> Map.keys() |> Enum.max() |> Kernel.+(1)
# Run the program once to get a trace of all instructions before the program loops
{:loop, _acc, trace} =
input
|> parse_input()
|> run_program(state, MapSet.new())
# Generate all alternate versions of the program, find the one that terminates
trace
|> Stream.flat_map(fn pointer ->
case Map.fetch!(parsed_input, pointer) do
{"jmp", value} -> [Map.put(parsed_input, pointer, {"nop", value})]
{"nop", value} -> [Map.put(parsed_input, pointer, {"jmp", value})]
{"acc", _value} -> []
end
end)
|> Stream.map(fn input -> run_program(input, state, MapSet.new()) end)
|> Enum.find_value(fn
{:termination, acc, ^final_instruction} -> acc
{:loop, _, _} -> false
end)
end
defp run_program(input, state, executed, trace \\ []) do
with {:looping?, false} <- {:looping?, MapSet.member?(executed, state.pointer)},
{:get_instruction, {:ok, instruction}} <- {:get_instruction, Map.fetch(input, state.pointer)} do
executed = MapSet.put(executed, state.pointer)
trace = [state.pointer | trace]
state = apply_instruction(state, instruction)
run_program(input, state, executed, trace)
else
{:looping?, true} -> {:loop, state.acc, trace}
{:get_instruction, :error} -> {:termination, state.acc, state.pointer}
end
end
defp apply_instruction(%{pointer: pointer} = state, {"nop", _}), do: %{state | pointer: pointer + 1}
defp apply_instruction(%{pointer: pointer} = state, {"jmp", jmp}), do: %{state | pointer: pointer + jmp}
defp apply_instruction(%{pointer: pointer, acc: acc} = state, {"acc", value}) do
%{state | pointer: pointer + 1, acc: acc + value}
end
end
|
lib/advent20/08_game_console.ex
| 0.755096
| 0.586671
|
08_game_console.ex
|
starcoder
|
defmodule Model.Stop do
@moduledoc """
Stop represents a physical location where the transit system can pick up or drop off passengers. See
[GTFS `stops.txt`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stopstxt)
"""
use Recordable, [
:id,
:name,
:description,
:address,
:platform_code,
:platform_name,
:latitude,
:longitude,
:parent_station,
:zone_id,
:municipality,
:on_street,
:at_street,
:vehicle_type,
wheelchair_boarding: 0,
location_type: 0
]
alias Model.WGS84
@type id :: String.t()
@typedoc """
The meaning of `wheelchair_boarding` varies based on whether this is a stop or station.
## Independent Stop or Parent Station
| Value | Vehicles with wheelchair boarding | Meaning |
|-------|-----------------------------------|---------|
| `0` | N/A | No accessibility information is available |
| `1` | >= 1 | At least some vehicles at this stop can be boarded by a rider in a wheelchair |
| `2` | 0 | Wheelchair boarding is not possible at this stop |
## Stop/Platform at a Parent Station
| Value | Wheelchair accessible paths | Meaning |
|-------|-----------------------------|---------|
| `0` | Inherit | Inherit from parent station |
| `1` | 1 | There exists some accessible path from outside the station to the specific stop |
| `2` | 0 | There exists no accessible path from outside the station to the specific stop / platform |
See [GTFS `stops.txt` `wheelchair_boarding`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stopstxt).
"""
@type wheelchair_boarding :: 0..2
@typedoc """
| Value | Type | Description |
| - | - | - |
| `0` | Stop | A location where passengers board or disembark from a transit vehicle. |
| `1` | Station | A physical structure or area that contains one or more stops. |
| `2` | Station Entrance/Exit | A location where passengers can enter or exit a station from the street. The stop entry must also specify a parent_station value referencing the stop ID of the parent station for the entrance. |
"""
@type location_type :: 0..2
@typedoc """
* `:id` - the unique ID for this stop. See [GTFS `stops.txt` `stop_id`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stopstxt).
* `:name` - Name of a stop, station, or station entrance in the local and tourist vernacular. See [GTFS `stops.txt` `stop_name`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stopstxt)
* `:description` - Description of the stop. See [GTFS `stops.txt` `stop_desc`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stopstxt).
* `:address` - A street address for the station. See [MBTA extensions to GTFS](https://docs.google.com/document/d/1RoQQj3_-7FkUlzFP4RcK1GzqyHp4An2lTFtcmW0wrqw/view).
* `:platform_code` - A short code representing the platform/track (like a number or letter). See [GTFS `stops.txt` `platform_code`](https://developers.google.com/transit/gtfs/reference/gtfs-extensions#stopstxt_1).
* `:platform_name` - A textual description of the platform or track. See [MBTA extensions to GTFS](https://docs.google.com/document/d/1RoQQj3_-7FkUlzFP4RcK1GzqyHp4An2lTFtcmW0wrqw/view).
* `:latitude` - Latitude of the stop or station. See
[GTFS `stops.txt` `stop_lat`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stopstxt).
* `:longitude` - Longitude of the stop or station. See
[GTFS `stops.txt` `stop_lon`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stopstxt).
* `:parent_station` - `id` of the `Model.Stop.t` representing the station this stop is inside or outside. `nil` if
this is a station or a stop not associated with a station.
* `:wheelchair_boarding` - See [GTFS `stops.txt` `wheelchair_boarding`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stopstxt).
* `:location_type` - See [GTFS `stops.txt` `location_type`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stopstxt).
"""
@type t :: %__MODULE__{
id: id,
name: String.t(),
description: String.t() | nil,
address: String.t() | nil,
platform_code: String.t() | nil,
platform_name: String.t() | nil,
latitude: WGS84.latitude(),
longitude: WGS84.longitude(),
parent_station: id | nil,
wheelchair_boarding: wheelchair_boarding,
location_type: location_type,
zone_id: String.t() | nil,
municipality: String.t() | nil,
on_street: String.t() | nil,
at_street: String.t() | nil,
vehicle_type: Model.Route.route_type() | nil
}
@doc """
Returns a boolean indicating whether the stop has a location.
## Examples
iex> located?(%Stop{latitude: 1, longitude: -2})
true
iex> located?(%Stop{})
false
"""
def located?(%__MODULE__{} = stop) do
case stop do
%{latitude: lat, longitude: lon} when is_number(lat) and is_number(lon) ->
true
_ ->
false
end
end
end
|
apps/model/lib/model/stop.ex
| 0.91715
| 0.74468
|
stop.ex
|
starcoder
|
defmodule Typo.Utils.PageSize do
@moduledoc false
import Typo.Utils.Guards
@page_sizes %{
# a-series
"a0" => {2380, 3368},
"a1" => {1684, 2380},
"a2" => {1190, 1684},
"a3" => {842, 1190},
"a4" => {595, 842},
"a5" => {421, 595},
"a6" => {297, 421},
"a7" => {210, 297},
"a8" => {148, 210},
"a9" => {105, 148},
# b-series
"b0" => {2836, 4008},
"b1" => {2004, 2836},
"b2" => {1418, 2004},
"b3" => {1002, 1418},
"b4" => {709, 1002},
"b5" => {501, 709},
"b6" => {355, 501},
"b7" => {250, 355},
"b8" => {178, 250},
"b9" => {125, 178},
"b10" => {89, 125},
# c-series
"c2" => {1837, 578},
"c3" => {578, 919},
"c4" => {919, 649},
"c5" => {649, 459},
"c6" => {459, 323},
# d-series
"d0" => {3090, 2186},
# ra-series
"ra0" => {3458, 2438},
"ra1" => {2438, 1729},
"ra2" => {1729, 1219},
# sra-series
"sra0" => {3628, 2551},
"sra1" => {2551, 1814},
"sra2" => {1814, 1276},
"sra3" => {1276, 907},
"sra4" => {907, 638},
# US
"ansi c" => {1584, 1224},
"ansi d" => {2448, 1584},
"ansi e" => {3168, 2448},
"letter" => {612, 792},
"legal" => {612, 1008},
# UK paper
"demy" => {1116, 1440},
"foolscap" => {954, 1188},
"imperial" => {1584, 2160},
"large post" => {1188, 1512},
"medium" => {1296, 1656},
"royal" => {1440, 1800},
"sheet half" => {954, 1782},
"sheet third" => {954, 1584},
"small medium" => {1260, 1584},
"small post" => {1044, 1332},
"small royal" => {1368, 1728},
# UK book
"metric crown quarto" => {536, 697},
"metric crown octavo" => {349, 527},
"metric demy quarto" => {621, 782},
"metric demy octavo" => {391, 612},
"metric large crown quarto" => {570, 731},
"metric large crown octavo" => {366, 561},
"metric royal quarto" => {672, 884},
"metric royal octavo" => {366, 561},
# Newspaper
"berliner" => {893, 1332},
"broadsheet" => {1692, 2124},
"rhenish" => {1006, 1474},
"tabloid" => {792, 1224},
# others
"business card" => {153, 243},
"c5e" => {462, 649},
"comm10e" => {298, 683},
"credit card" => {153, 243},
"dle" => {312, 624},
"executive" => {542, 720},
"folio" => {595, 935},
"ledger" => {792, 1224}
}
@supported_sizes Enum.sort(Map.keys(@page_sizes))
@doc """
Returns a list of supported paper size names.
"""
@spec get_supported_sizes :: [String.t(), ...]
def get_supported_sizes, do: @supported_sizes
@doc """
Gets dimensions for given `size` / `orientation` combination.
`size` can be either an atom or a binary string, and `orientation` can be either
`:portrait` or `:landscape`.
Run `get_supported_sizes/0` to get the list of available paper size names.
Returns `{:ok, {width, height}}` if successful, or `{:error, :invalid_page_size}`
otherwise.
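## Example
`"a4"` is {595, 842} points in portrait, so landscape swaps the pair:
    iex> Typo.Utils.PageSize.page_size("a4", :landscape)
    {:ok, {842, 595}}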
"""
@spec page_size(atom() | String.t(), Typo.page_orientation()) ::
{:ok, {non_neg_integer(), non_neg_integer()}} | Typo.error()
def page_size(size, orientation \\ :portrait)
def page_size(size, orientation)
when is_binary(size) and is_page_orientation(orientation) do
cleaned = size |> String.downcase() |> String.replace("_", " ")
case Map.get(@page_sizes, cleaned) do
{w, h} ->
case orientation do
:portrait -> {:ok, {w, h}}
:landscape -> {:ok, {h, w}}
end
nil ->
{:error, :invalid_page_size}
end
end
def page_size(size, orientation) when is_atom(size) and is_page_orientation(orientation),
do: page_size(Atom.to_string(size), orientation)
end
|
lib/typo/utils/page_size.ex
| 0.708616
| 0.408277
|
page_size.ex
|
starcoder
|
defmodule Elixpath do
# Import some example from README.md to run doctests.
# Make sure to touch (i.e. update timestamp of) this file
# when editing examples in README.md.
readme = File.read!(__DIR__ |> Path.expand() |> Path.dirname() |> Path.join("README.md"))
[examples] = Regex.run(~r/##\s*Examples.+/s, readme)
@moduledoc """
Extract data from nested Elixir data structure using JSONPath-like path expressions.
See [this page](readme.html) for syntax.
""" <> examples
require Elixpath.PathComponent, as: PathComponent
require Elixpath.Tag, as: Tag
@typedoc """
Elixpath, already compiled by `Elixpath.Parser.parse/2` or `sigil_p/2`.
"""
@type t :: %__MODULE__{path: [PathComponent.t()]}
defstruct path: []
@doc """
Compiles string to internal Elixpath representation.
Warning: Do not specify `unsafe_atom` modifier (`u`) for untrusted input.
See `String.to_atom/1`, which this function uses to create new atoms, for details.
## Modifiers
* `unsafe_atom` (u) - passes `unsafe_atom: true` option to `Elixpath.Parser.parse/2`.
* `atom_keys_preferred` (a) - passes `prefer_keys: :atom` option to `Elixpath.Parser.parse/2`.
## Examples
iex> import Elixpath, only: [sigil_p: 2]
iex> ~p/.string..:b[1]/
#Elixpath<[elixpath_child: "string", elixpath_descendant: :b, elixpath_child: 1]>
iex> ~p/.atom..:b[1]/a
#Elixpath<[elixpath_child: :atom, elixpath_descendant: :b, elixpath_child: 1]>
"""
defmacro sigil_p({:<<>>, _meta, [str]}, modifiers) do
opts = [
unsafe_atom: ?u in modifiers,
prefer_keys: if(?a in modifiers, do: :atom, else: :string)
]
path = Elixpath.Parser.parse!(str, opts) |> Macro.escape()
quote do
unquote(path)
end
end
@doc """
Query data from nested data structure.
Returns list of matches, wrapped by `:ok`.
When no match, `{:ok, []}` is returned.
## Options
For path parsing options, see `Elixpath.Parser.parse/2`.
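## Examples
A minimal sketch (string keys, so the default `prefer_keys: :string` applies):
    Elixpath.query(%{"a" => %{"b" => 1}}, ".a.b")
    #=> {:ok, [1]}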
"""
@spec query(data :: term, t | String.t(), [Elixpath.Parser.option()]) ::
{:ok, [term]} | {:error, reason :: term}
def query(data, path_or_str, opts \\ [])
def query(data, str, opts) when is_binary(str) do
with {:ok, compiled_path} <- Elixpath.Parser.parse(str, opts) do
query(data, compiled_path, opts)
end
end
def query(data, %Elixpath{} = path, opts) do
do_query(data, path, _gots = [], opts)
end
@spec do_query(term, t, list, Keyword.t()) :: {:ok, [term]} | {:error, reason :: term}
defp do_query(_data, %Elixpath{path: []}, _gots, _opts), do: {:ok, []}
defp do_query(data, %Elixpath{path: [PathComponent.child(key)]}, _gots, opts) do
Elixpath.Access.query(data, key, opts)
end
defp do_query(data, %Elixpath{path: [PathComponent.child(key) | rest]} = path, _gots, opts) do
with {:ok, children} <- Elixpath.Access.query(data, key, opts) do
Enum.reduce_while(children, {:ok, []}, fn child, {:ok, gots_acc} ->
case do_query(child, %{path | path: rest}, gots_acc, opts) do
{:ok, fetched} -> {:cont, {:ok, gots_acc ++ fetched}}
error -> {:halt, error}
end
end)
end
end
defp do_query(data, %Elixpath{path: [PathComponent.descendant(key) | rest]} = path, gots, opts) do
with direct_path <- %{path | path: [PathComponent.child(key) | rest]},
{:ok, direct_children} <- do_query(data, direct_path, gots, opts),
indirect_path <- %{
path
| path: [
PathComponent.child(Tag.wildcard()),
PathComponent.descendant(key) | rest
]
},
{:ok, indirect_children} <- do_query(data, indirect_path, gots, opts) do
{:ok, direct_children ++ indirect_children}
end
end
@doc """
Query data from nested data structure.
Same as `query/3`, except that `query!/3` raises on error.
Returns `[]` when no match.
"""
@spec query!(data :: term, t | String.t(), [Elixpath.Parser.option()]) :: [term] | no_return
def query!(data, path, opts \\ []) do
case query(data, Elixpath.Parser.parse!(path, opts), opts) do
{:ok, got} -> got
end
end
@doc """
Get *single* data from nested data structure.
Returns `default` when no match.
Raises on error.
"""
@spec get!(data :: term, t | String.t(), default, [Elixpath.Parser.option()]) ::
term | default | no_return
when default: term
def get!(data, path_or_str, default \\ nil, opts \\ [])
def get!(data, str, default, opts) when is_binary(str) do
get!(data, Elixpath.Parser.parse!(str, opts), default, opts)
end
def get!(data, %Elixpath{} = path, default, opts) do
case query!(data, path, opts) do
[] -> default
[head | _rest] -> head
end
end
@doc ~S"""
Converts Elixpath to string.
Also available via `Kernel.to_string/1`.
This function is named `stringify/1` to avoid name collision
with `Kernel.to_string/1` when the entire module is imported.
## Examples
iex> import Elixpath, only: [sigil_p: 2]
iex> path = ~p/.1.child..:descendant/u
#Elixpath<[elixpath_child: 1, elixpath_child: "child", elixpath_descendant: :descendant]>
iex> path |> to_string()
"[1].\"child\"..:descendant"
iex> "interpolation: #{~p/..1[*]..*/}"
"interpolation: ..[1].*..*"
"""
@spec stringify(t) :: String.t()
def stringify(path) do
Enum.map_join(path.path, fn
PathComponent.child(Tag.wildcard()) -> ".*"
PathComponent.descendant(Tag.wildcard()) -> "..*"
PathComponent.child(int) when is_integer(int) -> "[#{inspect(int)}]"
PathComponent.descendant(int) when is_integer(int) -> "..[#{inspect(int)}]"
PathComponent.child(x) -> ".#{inspect(x)}"
PathComponent.descendant(x) -> "..#{inspect(x)}"
end)
end
end
defimpl Inspect, for: Elixpath do
@spec inspect(Elixpath.t(), Inspect.Opts.t()) :: Inspect.Algebra.t()
def inspect(path, opts) do
Inspect.Algebra.concat(["#Elixpath<", Inspect.Algebra.to_doc(path.path, opts), ">"])
end
end
defimpl String.Chars, for: Elixpath do
@spec to_string(Elixpath.t()) :: binary
def to_string(path), do: Elixpath.stringify(path)
end
defimpl List.Chars, for: Elixpath do
@spec to_charlist(Elixpath.t()) :: charlist
def to_charlist(path), do: Elixpath.stringify(path) |> String.to_charlist()
end
# path: lib/elixpath.ex
defmodule DBConnection.Proxy do
@moduledoc """
A behaviour module for implementing a proxy module during the checkout of a
connection.
`DBConnection.Proxy` callback modules can wrap a `DBConnection` callback
module and state while it is outside the pool.
"""
@doc """
Set up the initial state of the proxy. Return `{:ok, state}` to continue,
`:ignore` not to use the proxy or `{:error, exception}` to raise an exception.
This callback is called before checking out a connection from the pool.
"""
@callback init(Keyword.t) ::
{:ok, state :: any} | :ignore | {:error, Exception.t}
@doc """
Checks out the connection state to the proxy module. Return
`{:ok, conn, state}` to allow the checkout and continue,
`{:error, exception, conn, state}` to disallow the checkout and to raise an
exception or `{:disconnect, exception, conn, state}` to disconnect the
connection and raise an exception.
This callback is called after the connection's `checkout/1` callback and should
set up the connection state for use by the proxy module.
"""
@callback checkout(module, Keyword.t, conn :: any, state :: any) ::
{:ok, new_conn :: any, new_state :: any} |
{:error | :disconnect, Exception.t, new_conn :: any, new_state :: any}
@doc """
Checks in the connection state so it can be checked into the pool. Return
`{:ok, conn}` to allow the checkin and continue,
`{:error, exception, conn, state}` to allow the checkin but raise an
exception or `{:disconnect, exception, conn, state}` to disconnect the
connection and raise an exception.
This callback is called before the connection's `checkin/1` callback and should
undo any changes made to the connection in `checkout/4`.
"""
@callback checkin(module, Keyword.t, conn :: any, state :: any) ::
{:ok, new_conn :: any} |
{:error | :disconnect, Exception.t, new_conn :: any, new_state :: any}
@doc """
Handle the beginning of a transaction. Return `{:ok, conn, state}` to
continue, `{:error, exception, conn, state}` to abort the transaction and
continue or `{:disconnect, exception, conn, state}` to abort the transaction
and disconnect the connection.
"""
@callback handle_begin(module, opts :: Keyword.t, conn :: any,
state :: any) ::
{:ok, new_conn :: any, new_state :: any} |
{:error | :disconnect, Exception.t, new_conn :: any, new_state :: any}
@doc """
Handle committing a transaction. Return `{:ok, conn, state}` on success and
to continue, `{:error, exception, conn, state}` to abort the transaction and
continue or `{:disconnect, exception, conn, state}` to abort the transaction
and disconnect the connection.
"""
@callback handle_commit(module, opts :: Keyword.t, conn :: any,
state :: any) ::
{:ok, new_conn :: any, new_state :: any} |
{:error | :disconnect, Exception.t, new_conn :: any, new_state :: any}
@doc """
Handle rolling back a transaction. Return `{:ok, conn, state}` on success
and to continue, `{:error, exception, conn, state}` to abort the transaction
and continue or `{:disconnect, exception, conn, state}` to abort the
transaction and disconnect.
"""
@callback handle_rollback(module, opts :: Keyword.t, conn :: any,
state :: any) ::
{:ok, new_conn :: any, new_state :: any} |
{:error | :disconnect, Exception.t, new_conn :: any, new_state :: any}
@doc """
Prepare a query with the database. Return `{:ok, query, conn, state}` where
`query` is a query to pass to `execute/4` or `close/3`,
`{:error, exception, conn, state}` to return an error and continue or
`{:disconnect, exception, conn, state}` to return an error and disconnect the
connection.
"""
@callback handle_prepare(module, DBConnection.query, opts :: Keyword.t,
conn :: any, state :: any) ::
{:ok, DBConnection.query, new_conn :: any, new_state :: any} |
{:error | :disconnect, Exception.t, new_conn :: any, new_state :: any}
@doc """
Execute a query. Return `{:ok, result, conn, state}` to return the result
`result` and continue, `{:prepare, conn, state}` to retry execute after
preparing the query, `{:error, exception, conn, state}` to return an error and
continue or `{:disconnect, exception, conn, state}` to return an error and
disconnect the connection.
"""
@callback handle_execute(module, DBConnection.query, DBConnection.params,
opts :: Keyword.t, conn :: any, state :: any) ::
{:ok, DBConnection.result, new_conn :: any, new_state :: any} |
{:prepare, new_conn :: any, new_state :: any} |
{:error | :disconnect, Exception.t, new_conn :: any, new_state :: any}
@doc """
Execute a query and close it. See `handle_execute/6`.
"""
@callback handle_execute_close(module, DBConnection.query,
DBConnection.params, opts :: Keyword.t, conn :: any, state :: any) ::
{:ok, DBConnection.result, new_conn :: any, new_state :: any} |
{:prepare, new_conn :: any, new_state :: any} |
{:error | :disconnect, Exception.t, new_conn :: any, new_state :: any}
@doc """
Close a query. Return `{:ok, conn, state}` on success and to continue,
`{:error, exception, conn, state}` to return an error and continue, or
`{:disconnect, exception, conn, state}` to return an error and disconnect.
"""
@callback handle_close(module, DBConnection.query, opts :: Keyword.t,
conn :: any, state :: any) ::
{:ok, new_conn :: any, new_state :: any} |
{:error | :disconnect, Exception.t, new_conn :: any, new_state :: any}
@doc """
Terminate the proxy. Should clean up any side effects, as the process may not exit.
This callback is called after checking in a connection to the pool.
"""
@callback terminate(:normal | {:disconnect, Exception.t} | {:stop, any},
opts :: Keyword.t, state :: any) :: any
@doc """
Use `DBConnection.Proxy` to set the behaviour and include default
implementations. The default implementation of `init/1` stores
the checkout options as the proxy's state. `checkout/4`, `checkin/4` and
`terminate/3` act as no-ops. The remaining callbacks call the internal
connection module with the given arguments and state.
"""
defmacro __using__(_) do
quote location: :keep do
@behaviour DBConnection.Proxy
def init(opts), do: {:ok, opts}
def checkout(_, _opts, conn, state), do: {:ok, conn, state}
def checkin(_, _, conn, _), do: {:ok, conn}
def handle_begin(mod, opts, conn, state) do
case apply(mod, :handle_begin, [opts, conn]) do
{:ok, _} = ok ->
:erlang.append_element(ok, state)
{tag, _, _} = error when tag in [:error, :disconnect] ->
:erlang.append_element(error, state)
other ->
raise DBConnection.Error, "bad return value: #{inspect other}"
end
end
def handle_commit(mod, opts, conn, state) do
case apply(mod, :handle_commit, [opts, conn]) do
{:ok, _} = ok ->
:erlang.append_element(ok, state)
{tag, _, _} = error when tag in [:error, :disconnect] ->
:erlang.append_element(error, state)
other ->
raise DBConnection.Error, "bad return value: #{inspect other}"
end
end
def handle_rollback(mod, opts, conn, state) do
case apply(mod, :handle_rollback, [opts, conn]) do
{:ok, _} = ok ->
:erlang.append_element(ok, state)
{tag, _, _} = error when tag in [:error, :disconnect] ->
:erlang.append_element(error, state)
other ->
raise DBConnection.Error, "bad return value: #{inspect other}"
end
end
def handle_prepare(mod, query, opts, conn, state) do
case apply(mod, :handle_prepare, [query, opts, conn]) do
{:ok, _, _} = ok ->
:erlang.append_element(ok, state)
{tag, _, _} = error when tag in [:error, :disconnect] ->
:erlang.append_element(error, state)
other ->
raise DBConnection.Error, "bad return value: #{inspect other}"
end
end
def handle_execute(mod, query, params, opts, conn, state) do
case apply(mod, :handle_execute, [query, params, opts, conn]) do
{:ok, _, _} = ok ->
:erlang.append_element(ok, state)
{:prepare, _} = prepare ->
:erlang.append_element(prepare, state)
{tag, _, _} = error when tag in [:error, :disconnect] ->
:erlang.append_element(error, state)
other ->
raise DBConnection.Error, "bad return value: #{inspect other}"
end
end
def handle_execute_close(mod, query, params, opts, conn, state) do
case apply(mod, :handle_execute_close, [query, params, opts, conn]) do
{:ok, _, _} = ok ->
:erlang.append_element(ok, state)
{:prepare, _} = prepare ->
:erlang.append_element(prepare, state)
{tag, _, _} = error when tag in [:error, :disconnect] ->
:erlang.append_element(error, state)
other ->
raise DBConnection.Error, "bad return value: #{inspect other}"
end
end
def handle_close(mod, query, opts, conn, state) do
case apply(mod, :handle_close, [query, opts, conn]) do
{:ok, _} = ok ->
:erlang.append_element(ok, state)
{tag, _, _} = error when tag in [:error, :disconnect] ->
:erlang.append_element(error, state)
other ->
raise DBConnection.Error, "bad return value: #{inspect other}"
end
end
def terminate(_, _, _), do: :ok
defoverridable [init: 1, checkout: 4, checkin: 4, handle_begin: 4,
handle_commit: 4, handle_rollback: 4, handle_prepare: 5,
handle_execute: 6, handle_execute_close: 6,
handle_close: 5, terminate: 3]
end
end
end
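# A hedged sketch of a custom proxy built on the defaults above; the module
# name and logging are illustrative only. Because the generated callbacks are
# overridable, `super/4` delegates to the default implementation:
#
#     defmodule MyApp.LoggingProxy do
#       use DBConnection.Proxy
#
#       def handle_begin(mod, opts, conn, state) do
#         IO.puts("BEGIN transaction")
#         super(mod, opts, conn, state)
#       end
#     end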
# path: deps/db_connection/lib/db_connection/proxy.ex
defmodule Segment.Analytics.Batch do
@derive [Poison.Encoder]
defstruct [
:batch,
:sentAt
]
end
defmodule Segment.Analytics.Track do
@derive [Poison.Encoder]
@method "track"
defstruct [
:anonymousId,
:context,
:event,
:messageId,
:properties,
:timestamp,
:userId,
:version,
type: @method
]
end
defmodule Segment.Analytics.Identify do
@derive [Poison.Encoder]
@method "identify"
defstruct [
:anonymousId,
:context,
:messageId,
:timestamp,
:traits,
:userId,
:version,
type: @method
]
end
defmodule Segment.Analytics.Alias do
@derive [Poison.Encoder]
@method "alias"
defstruct [:context, :previousId, :timestamp, :userId, :version, type: @method]
end
defmodule Segment.Analytics.Page do
@derive [Poison.Encoder]
@method "page"
defstruct [
:anonymousId,
:context,
:messageId,
:name,
:properties,
:timestamp,
:userId,
:version,
type: @method
]
end
defmodule Segment.Analytics.Screen do
@derive [Poison.Encoder]
@method "screen"
defstruct [
:anonymousId,
:context,
:messageId,
:name,
:properties,
:timestamp,
:userId,
:version,
type: @method
]
end
defmodule Segment.Analytics.Group do
@derive [Poison.Encoder]
@method "group"
defstruct [
:anonymousId,
:context,
:groupId,
:messageId,
:timestamp,
:traits,
:userId,
:version,
type: @method
]
end
defmodule Segment.Analytics.Context.Library do
@derive [Poison.Encoder]
@project_name Mix.Project.get().project[:name]
@project_version Mix.Project.get().project[:version]
defstruct [:name, :version, :transport]
def build() do
%__MODULE__{
name: @project_name,
version: @project_version,
# the only transport supported by the library for now.
transport: "http"
}
end
end
defmodule Segment.Analytics.Context do
@derive [Poison.Encoder]
defstruct [
:app,
:ip,
:library,
:location,
:os,
:page,
:referrer,
:screen,
:timezone,
:traits,
:userAgent
]
end
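# A hedged sketch of assembling one of the event structs above (field values
# are illustrative):
#
#     %Segment.Analytics.Track{
#       userId: "user-123",
#       event: "Signed Up",
#       properties: %{plan: "pro"},
#       context: %Segment.Analytics.Context{
#         library: Segment.Analytics.Context.Library.build()
#       }
#     }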
# path: lib/segment/analytics/model.ex
defmodule Gringotts.Adapter do
@moduledoc """
Validates the "required" configuration.
All gateway modules must `use` this module, which provides a run-time
configuration validator.
Gringotts picks up the merchant's Gateway authentication secrets from the
Application config. The configuration validator can be customized by providing
a list of `required_config` keys. The validator will check if these keys are
available at run-time, before each call to the Gateway.
## Example
Say a merchant must provide his `secret_user_name` and `secret_password` to
some Gateway `XYZ`. Then, `Gringotts` expects that the `GatewayXYZ` module
would use `Adapter` in the following manner:
```
defmodule Gringotts.Gateways.GatewayXYZ do
use Gringotts.Adapter, required_config: [:secret_user_name, :secret_password]
use Gringotts.Gateways.Base
# the rest of the implementation
end
```
And the merchant would provide these secrets in the Application config,
possibly via `config/config.exs` like so,
```
# config/config.exs
config :gringotts, Gringotts.Gateways.GatewayXYZ,
adapter: Gringotts.Gateways.GatewayXYZ,
secret_user_name: "some_really_secret_user_name",
secret_password: "<PASSWORD>"
```
"""
defmacro __using__(opts) do
quote bind_quoted: [opts: opts] do
@required_config opts[:required_config] || []
@doc """
Catches gateway configuration errors.
Raises a run-time `ArgumentError` if any of the `required_config` values
is not available or missing from the Application config.
"""
def validate_config(config) when is_list(config) do
missing_keys =
Enum.reduce(@required_config, [], fn key, missing_keys ->
if config[key] in [nil, ""], do: [key | missing_keys], else: missing_keys
end)
raise_on_missing_config(missing_keys, config)
end
def validate_config(config) when is_map(config) do
config
|> Enum.into([])
|> validate_config
end
defp raise_on_missing_config([], _config), do: :ok
defp raise_on_missing_config(missing_keys, config) do
raise ArgumentError, """
expected #{inspect(missing_keys)} to be set, got: #{inspect(config)}
"""
end
end
end
end
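# A hedged sketch of the validator in action for the GatewayXYZ example above
# (key names are illustrative):
#
#     GatewayXYZ.validate_config(secret_user_name: "u", secret_password: "p")
#     #=> :ok
#
#     GatewayXYZ.validate_config(secret_user_name: "u")
#     #=> ** (ArgumentError) expected [:secret_password] to be set, got: [secret_user_name: "u"]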
# path: lib/gringotts/adapter.ex
defmodule AWS.FraudDetector do
@moduledoc """
This is the Amazon Fraud Detector API Reference.
This guide is for developers who need detailed information about Amazon Fraud
Detector API actions, data types, and errors. For more information about Amazon
Fraud Detector features, see the [Amazon Fraud Detector User Guide](https://docs.aws.amazon.com/frauddetector/latest/ug/).
"""
alias AWS.Client
alias AWS.Request
def metadata do
%AWS.ServiceMetadata{
abbreviation: nil,
api_version: "2019-11-15",
content_type: "application/x-amz-json-1.1",
credential_scope: nil,
endpoint_prefix: "frauddetector",
global?: false,
protocol: "json",
service_id: "FraudDetector",
signature_version: "v4",
signing_name: "frauddetector",
target_prefix: "AWSHawksNestServiceFacade"
}
end
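# A hedged usage sketch: every action below posts JSON via
# `AWS.Request.request_post/5`. Client construction follows the aws-elixir
# convention; credentials and region are illustrative:
#
#     client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")
#     {:ok, result, _http_response} =
#       AWS.FraudDetector.get_detectors(client, %{"maxResults" => 10})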
@doc """
Creates a batch of variables.
"""
def batch_create_variable(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchCreateVariable", input, options)
end
@doc """
Gets a batch of variables.
"""
def batch_get_variable(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "BatchGetVariable", input, options)
end
@doc """
Cancels an in-progress batch import job.
"""
def cancel_batch_import_job(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CancelBatchImportJob", input, options)
end
@doc """
Cancels the specified batch prediction job.
"""
def cancel_batch_prediction_job(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CancelBatchPredictionJob", input, options)
end
@doc """
Creates a batch import job.
"""
def create_batch_import_job(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateBatchImportJob", input, options)
end
@doc """
Creates a batch prediction job.
"""
def create_batch_prediction_job(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateBatchPredictionJob", input, options)
end
@doc """
Creates a detector version.
The detector version starts in a `DRAFT` status.
"""
def create_detector_version(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateDetectorVersion", input, options)
end
@doc """
Creates a model using the specified model type.
"""
def create_model(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateModel", input, options)
end
@doc """
Creates a version of the model using the specified model type and model id.
"""
def create_model_version(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateModelVersion", input, options)
end
@doc """
Creates a rule for use with the specified detector.
"""
def create_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateRule", input, options)
end
@doc """
Creates a variable.
"""
def create_variable(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateVariable", input, options)
end
@doc """
Deletes data that was batch imported to Amazon Fraud Detector.
"""
def delete_batch_import_job(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteBatchImportJob", input, options)
end
@doc """
Deletes a batch prediction job.
"""
def delete_batch_prediction_job(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteBatchPredictionJob", input, options)
end
@doc """
Deletes the detector.
Before deleting a detector, you must first delete all detector versions and rule
versions associated with the detector.
When you delete a detector, Amazon Fraud Detector permanently deletes the
detector and the data is no longer stored in Amazon Fraud Detector.
"""
def delete_detector(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteDetector", input, options)
end
@doc """
Deletes the detector version.
You cannot delete detector versions that are in `ACTIVE` status.
When you delete a detector version, Amazon Fraud Detector permanently deletes
the detector and the data is no longer stored in Amazon Fraud Detector.
"""
def delete_detector_version(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteDetectorVersion", input, options)
end
@doc """
Deletes an entity type.
You cannot delete an entity type that is included in an event type.
When you delete an entity type, Amazon Fraud Detector permanently deletes that
entity type and the data is no longer stored in Amazon Fraud Detector.
"""
def delete_entity_type(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteEntityType", input, options)
end
@doc """
Deletes the specified event.
When you delete an event, Amazon Fraud Detector permanently deletes that event
and the event data is no longer stored in Amazon Fraud Detector.
"""
def delete_event(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteEvent", input, options)
end
@doc """
Deletes an event type.
You cannot delete an event type that is used in a detector or a model.
When you delete an event type, Amazon Fraud Detector permanently deletes that
event type and the data is no longer stored in Amazon Fraud Detector.
"""
def delete_event_type(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteEventType", input, options)
end
@doc """
Deletes all events of a particular event type.
"""
def delete_events_by_event_type(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteEventsByEventType", input, options)
end
@doc """
Removes a SageMaker model from Amazon Fraud Detector.
You can remove an Amazon SageMaker model if it is not associated with a detector
version. Removing a SageMaker model disconnects it from Amazon Fraud Detector,
but the model remains available in SageMaker.
"""
def delete_external_model(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteExternalModel", input, options)
end
@doc """
Deletes a label.
You cannot delete labels that are included in an event type in Amazon Fraud
Detector.
You cannot delete a label assigned to an event ID. You must first delete the
relevant event ID.
When you delete a label, Amazon Fraud Detector permanently deletes that label
and the data is no longer stored in Amazon Fraud Detector.
"""
def delete_label(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteLabel", input, options)
end
@doc """
Deletes a model.
You can delete models and model versions in Amazon Fraud Detector, provided that
they are not associated with a detector version.
When you delete a model, Amazon Fraud Detector permanently deletes that model
and the data is no longer stored in Amazon Fraud Detector.
"""
def delete_model(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteModel", input, options)
end
@doc """
Deletes a model version.
You can delete models and model versions in Amazon Fraud Detector, provided that
they are not associated with a detector version.
When you delete a model version, Amazon Fraud Detector permanently deletes that
model version and the data is no longer stored in Amazon Fraud Detector.
"""
def delete_model_version(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteModelVersion", input, options)
end
@doc """
Deletes an outcome.
You cannot delete an outcome that is used in a rule version.
When you delete an outcome, Amazon Fraud Detector permanently deletes that
outcome and the data is no longer stored in Amazon Fraud Detector.
"""
def delete_outcome(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteOutcome", input, options)
end
@doc """
Deletes the rule.
You cannot delete a rule if it is used by an `ACTIVE` or `INACTIVE` detector
version.
When you delete a rule, Amazon Fraud Detector permanently deletes that rule and
the data is no longer stored in Amazon Fraud Detector.
"""
def delete_rule(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteRule", input, options)
end
@doc """
Deletes a variable.
You can't delete variables that are included in an event type in Amazon Fraud
Detector.
Amazon Fraud Detector automatically deletes model output variables and SageMaker
model output variables when you delete the model. You can't delete these
variables manually.
When you delete a variable, Amazon Fraud Detector permanently deletes that
variable and the data is no longer stored in Amazon Fraud Detector.
"""
def delete_variable(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteVariable", input, options)
end
@doc """
Gets all versions for a specified detector.
"""
def describe_detector(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeDetector", input, options)
end
@doc """
Gets all of the model versions for the specified model type or for the specified
model type and model ID.
You can also get details for a single, specified model version.
"""
def describe_model_versions(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeModelVersions", input, options)
end
@doc """
Gets all batch import jobs, or a specific job if a job ID is specified.
This is a paginated API. If you provide a null `maxResults`, this action
retrieves a maximum of 50 records per page. If you provide a `maxResults`, the
value must be between 1 and 50. To get the next page results, provide the
pagination token from the `GetBatchImportJobsResponse` as part of your request.
A null pagination token fetches the records from the beginning.
"""
def get_batch_import_jobs(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetBatchImportJobs", input, options)
end
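# A hedged pagination sketch, reusing the illustrative `client` from the sketch
# above (response field names are assumed from the AWS API shapes):
#
#     {:ok, %{"batchImports" => jobs, "nextToken" => token}, _} =
#       AWS.FraudDetector.get_batch_import_jobs(client, %{"maxResults" => 50})
#     # Pass `token` back as "nextToken" to fetch the following page.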
@doc """
Gets all batch prediction jobs or a specific job if you specify a job ID.
This is a paginated API. If you provide a null `maxResults`, this action retrieves
a maximum of 50 records per page. If you provide a `maxResults`, the value must be
between 1 and 50. To get the next page results, provide the pagination token
from the `GetBatchPredictionJobsResponse` as part of your request. A null
pagination token fetches the records from the beginning.
"""
def get_batch_prediction_jobs(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetBatchPredictionJobs", input, options)
end
@doc """
Retrieves the status of a `DeleteEventsByEventType` action.
"""
def get_delete_events_by_event_type_status(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetDeleteEventsByEventTypeStatus", input, options)
end
@doc """
Gets a particular detector version.
"""
def get_detector_version(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetDetectorVersion", input, options)
end
@doc """
Gets all detectors or a single detector if a `detectorId` is specified.
This is a paginated API. If you provide a null `maxResults`, this action
retrieves a maximum of 10 records per page. If you provide a `maxResults`, the
value must be between 5 and 10. To get the next page results, provide the
pagination token from the `GetDetectorsResponse` as part of your request. A null
pagination token fetches the records from the beginning.
"""
def get_detectors(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetDetectors", input, options)
end
@doc """
Gets all entity types or a specific entity type if a name is specified.
This is a paginated API. If you provide a null `maxResults`, this action
retrieves a maximum of 10 records per page. If you provide a `maxResults`, the
value must be between 5 and 10. To get the next page results, provide the
pagination token from the `GetEntityTypesResponse` as part of your request. A
null pagination token fetches the records from the beginning.
"""
def get_entity_types(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetEntityTypes", input, options)
end
@doc """
Retrieves details of events stored with Amazon Fraud Detector.
This action does not retrieve prediction results.
"""
def get_event(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetEvent", input, options)
end
@doc """
Evaluates an event against a detector version.
If a version ID is not provided, the detector’s (`ACTIVE`) version is used.
"""
def get_event_prediction(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetEventPrediction", input, options)
end
@doc """
Gets details of the past fraud predictions for the specified event ID, event
type, detector ID, and detector version ID that was generated in the specified
time period.
"""
def get_event_prediction_metadata(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetEventPredictionMetadata", input, options)
end
@doc """
Gets all event types or a specific event type if name is provided.
This is a paginated API. If you provide a null `maxResults`, this action
retrieves a maximum of 10 records per page. If you provide a `maxResults`, the
value must be between 5 and 10. To get the next page results, provide the
pagination token from the `GetEventTypesResponse` as part of your request. A
null pagination token fetches the records from the beginning.
"""
def get_event_types(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetEventTypes", input, options)
end
@doc """
Gets the details for one or more Amazon SageMaker models that have been imported
into the service.
This is a paginated API. If you provide a null `maxResults`, this action
retrieves a maximum of 10 records per page. If you provide a `maxResults`, the
value must be between 5 and 10. To get the next page results, provide the
pagination token from the `GetExternalModelsResult` as part of your request. A
null pagination token fetches the records from the beginning.
"""
def get_external_models(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetExternalModels", input, options)
end
@doc """
Gets the encryption key if a KMS key has been specified to be used to encrypt
content in Amazon Fraud Detector.
"""
def get_kms_encryption_key(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetKMSEncryptionKey", input, options)
end
@doc """
Gets all labels or a specific label if name is provided.
This is a paginated API. If you provide a null `maxResults`, this action
retrieves a maximum of 50 records per page. If you provide a `maxResults`, the
value must be between 10 and 50. To get the next page results, provide the
pagination token from the `GetLabelsResponse` as part of your request. A null
pagination token fetches the records from the beginning.
"""
def get_labels(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetLabels", input, options)
end
@doc """
Gets the details of the specified model version.
"""
def get_model_version(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetModelVersion", input, options)
end
@doc """
Gets one or more models.
Gets all models for the Amazon Web Services account if no model type and no
model ID are provided. Gets all models of the specified model type for the
Amazon Web Services account if a model type is specified but no model ID is
provided. Gets a specific model if the (model type, model ID) tuple is specified.
This is a paginated API. If you provide a null `maxResults`, this action
retrieves a maximum of 10 records per page. If you provide a `maxResults`, the
value must be between 1 and 10. To get the next page results, provide the
pagination token from the response as part of your request. A null pagination
token fetches the records from the beginning.
"""
def get_models(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetModels", input, options)
end
@doc """
Gets one or more outcomes.
This is a paginated API. If you provide a null `maxResults`, this action
retrieves a maximum of 100 records per page. If you provide a `maxResults`, the
value must be between 50 and 100. To get the next page results, provide the
pagination token from the `GetOutcomesResult` as part of your request. A null
pagination token fetches the records from the beginning.
"""
def get_outcomes(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetOutcomes", input, options)
end
@doc """
Gets all rules for a detector (paginated) if `ruleId` and `ruleVersion` are not
specified.
Gets all rules for the detector and the `ruleId` if present (paginated). Gets a
specific rule if both the `ruleId` and the `ruleVersion` are specified.
This is a paginated API. If you provide a null `maxResults`, this action retrieves
a maximum of 100 records per page. If you provide a `maxResults`, the value must
be between 50 and 100. To get the next page results, provide the pagination token
from the `GetRulesResult` as part of your request. A null pagination token fetches
the records from the beginning.
"""
def get_rules(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetRules", input, options)
end
@doc """
Gets all of the variables or the specific variable.
This is a paginated API. If you provide a null `maxSizePerPage`, this action
retrieves a maximum of 100 records per page. If you provide a `maxSizePerPage`,
the value must be between 50 and 100. To get the next page results, provide the
pagination token from the `GetVariablesResult` as part of your request. A null
pagination token fetches the records from the beginning.
"""
def get_variables(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetVariables", input, options)
end
@doc """
Gets a list of past predictions.
The list can be filtered by detector ID, detector version ID, event ID, event
type, or by specifying a time period. If filter is not specified, the most
recent prediction is returned.
For example, the following filter lists all past predictions for the `xyz` event
type: `{"eventType": {"value": "xyz"}}`
This is a paginated API. If you provide a null `maxResults`, this action will
retrieve a maximum of 10 records per page. If you provide a `maxResults`, the
value must be between 50 and 100. To get the next page results, provide the
`nextToken` from the response as part of your request. A null `nextToken`
fetches the records from the beginning.
"""
def list_event_predictions(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListEventPredictions", input, options)
end
@doc """
Lists all tags associated with the resource.
This is a paginated API. To get the next page results, provide the pagination
token from the response as part of your request. A null pagination token fetches
the records from the beginning.
"""
def list_tags_for_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListTagsForResource", input, options)
end
@doc """
Creates or updates a detector.
"""
def put_detector(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutDetector", input, options)
end
@doc """
Creates or updates an entity type.
An entity represents who is performing the event. As part of a fraud prediction,
you pass the entity ID to indicate the specific entity who performed the event.
An entity type classifies the entity. Example classifications include customer,
merchant, or account.
"""
def put_entity_type(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutEntityType", input, options)
end
@doc """
Creates or updates an event type.
An event is a business activity that is evaluated for fraud risk. With Amazon
Fraud Detector, you generate fraud predictions for events. An event type defines
the structure for an event sent to Amazon Fraud Detector. This includes the
variables sent as part of the event, the entity performing the event (such as a
customer), and the labels that classify the event. Example event types include
online payment transactions, account registrations, and authentications.
"""
def put_event_type(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutEventType", input, options)
end
@doc """
Creates or updates an Amazon SageMaker model endpoint.
You can also use this action to update the configuration of the model endpoint,
including the IAM role and/or the mapped variables.
"""
def put_external_model(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutExternalModel", input, options)
end
@doc """
Specifies the KMS key to be used to encrypt content in Amazon Fraud Detector.
"""
def put_kms_encryption_key(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutKMSEncryptionKey", input, options)
end
@doc """
Creates or updates a label.
A label classifies an event as fraudulent or legitimate. Labels are associated
with event types and used to train supervised machine learning models in Amazon
Fraud Detector.
"""
def put_label(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutLabel", input, options)
end
@doc """
Creates or updates an outcome.
"""
def put_outcome(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutOutcome", input, options)
end
@doc """
Stores events in Amazon Fraud Detector without generating fraud predictions for
those events.
For example, you can use `SendEvent` to upload a historical dataset, which you
can then later use to train a model.
"""
def send_event(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "SendEvent", input, options)
end
@doc """
Assigns tags to a resource.
"""
def tag_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "TagResource", input, options)
end
@doc """
Removes tags from a resource.
"""
def untag_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UntagResource", input, options)
end
@doc """
Updates a detector version.
The detector version attributes that you can update include models, external
model endpoints, rules, rule execution mode, and description. You can only
update a `DRAFT` detector version.
"""
def update_detector_version(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateDetectorVersion", input, options)
end
@doc """
Updates the detector version's description.
You can update the metadata for any detector version (`DRAFT`, `ACTIVE`, or
`INACTIVE`).
"""
def update_detector_version_metadata(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateDetectorVersionMetadata", input, options)
end
@doc """
Updates the detector version’s status.
You can perform the following promotions or demotions using
`UpdateDetectorVersionStatus`: `DRAFT` to `ACTIVE`, `ACTIVE` to `INACTIVE`, and
`INACTIVE` to `ACTIVE`.
"""
def update_detector_version_status(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateDetectorVersionStatus", input, options)
end
@doc """
Updates the specified event with a new label.
"""
def update_event_label(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateEventLabel", input, options)
end
@doc """
Updates model description.
"""
def update_model(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateModel", input, options)
end
@doc """
Updates a model version.
Updating a model version retrains an existing model version using updated
training data and produces a new minor version of the model. You can update the
training data set location and data access role attributes using this action.
This action creates and trains a new minor version of the model, for example
version 1.01, 1.02, 1.03.
"""
def update_model_version(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateModelVersion", input, options)
end
@doc """
Updates the status of a model version.
You can perform the following status updates:
1. Change the `TRAINING_COMPLETE` status to `ACTIVE`.
2. Change `ACTIVE` to `INACTIVE`.
"""
def update_model_version_status(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateModelVersionStatus", input, options)
end
@doc """
Updates a rule's metadata.
The description attribute can be updated.
"""
def update_rule_metadata(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateRuleMetadata", input, options)
end
@doc """
Updates a rule version, resulting in a new rule version (version 1, 2, 3 ...).
"""
def update_rule_version(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateRuleVersion", input, options)
end
@doc """
Updates a variable.
"""
def update_variable(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateVariable", input, options)
end
end
# path: lib/aws/generated/fraud_detector.ex
defmodule GrovePi.PivotPi.PCA9685 do
alias GrovePi.Board
use Bitwise
@moduledoc """
This module provides lower level functions to interact with the
[PivotPi](https://www.dexterindustries.com/pivotpi-tutorials-documentation/)
through the [GrovePi](https://www.dexterindustries.com/grovepi/). Most users
should be able to obtain all needed functionality with `GrovePi.PivotPi`.
"""
# References
# https://github.com/DexterInd/PivotPi/tree/master/Software/Python
# https://www.nxp.com/docs/en/data-sheet/PCA9685.pdf
@type channel :: 0..15
# registers/etc:
@pca9685_address 0x40
@mode1 0x00
@mode2 0x01
@prescale 0xFE
@led0_on_l 0x06
# @led0_on_h 0x07
# @led0_off_l 0x08
# @led0_off_h 0x09
@all_led_on_l 0xFA
# @all_led_on_h 0xfb
# @all_led_off_l 0xfc
# @all_led_off_h 0xfd
# mode1 options:
# @mode1_allcall 0x01 # Unused
# @mode1_subadr1 0x02 # Unused
# @mode1_subadr2 0x03 # Unused
# @mode1_subadr3 0x04 # Unused
@mode1_sleep 0x10
@mode1_ai 0x20
# @mode1_extclk 0x40 # Unused
# @mode1_restart 0x80 # Unused
@mode1_default @mode1_ai
# mode2 options:
# Totem pole drive
@mode2_outdrv 0x04
# Inverted signal
@mode2_invrt 0x10
@mode2_default @mode2_outdrv ||| @mode2_invrt
@default_freq 60
@doc false
def initialize() do
set_pwm_off(:all)
set_modes()
set_pwm_freq(@default_freq)
end
defp set_modes() do
# Initialize the mode registers, but don't wake
# the PCA9685 up yet.
send_cmd(<<@mode1, @mode1_default ||| @mode1_sleep>>)
send_cmd(<<@mode2, @mode2_default>>)
end
defp set_pwm_freq(freq_hz) do
# The prescale register can only be set when the
# PCA9685 is in sleep mode.
send_cmd(<<@mode1, @mode1_default ||| @mode1_sleep>>)
send_cmd(<<@prescale, frequency_to_prescale(freq_hz)>>)
send_cmd(<<@mode1, @mode1_default>>)
# Wait at least 500 µs for the oscillator to start
# (Process.sleep granularity is 1 ms)
Process.sleep(1)
end
@doc """
Update the PWM on and off times on the specified channel
or `:all` to write to update all channels.
"""
@spec set_pwm(channel | :all, integer, integer) :: :ok | {:error, term}
def set_pwm(channel, on, off) do
send_cmd(<<channel_to_register(channel), on::little-size(16), off::little-size(16)>>)
end
@doc """
Turn the specified channel or `:all` ON.
"""
@spec set_pwm_on(channel | :all) :: :ok | {:error, term}
def set_pwm_on(channel) do
set_pwm(channel, 0x1000, 0)
end
@doc """
Turn the specified channel or `:all` OFF.
"""
@spec set_pwm_off(channel | :all) :: :ok | {:error, term}
def set_pwm_off(channel) do
set_pwm(channel, 0, 0x1000)
end
defp channel_to_register(channel) when is_integer(channel), do: @led0_on_l + 4 * channel
defp channel_to_register(:all), do: @all_led_on_l
defp frequency_to_prescale(hz), do: round(25_000_000.0 / 4096.0 / hz - 1.0)
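# Worked example for the formula above: at the 60 Hz default,
# prescale = round(25_000_000 / 4096 / 60 - 1) = round(100.7) = 101.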
@spec send_cmd(binary) :: :ok | {:error, term}
def send_cmd(command) do
Board.i2c_write_device(@pca9685_address, command)
end
end
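# A hedged usage sketch (channel number and duty cycle are illustrative):
#
#     GrovePi.PivotPi.PCA9685.initialize()
#     # 50% duty cycle on channel 0: on at count 0, off at count 2048 of 4096.
#     GrovePi.PivotPi.PCA9685.set_pwm(0, 0, 2048)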
# path: lib/grovepi/pivotpi/PCA9685.ex
defmodule AWS.ApplicationAutoScaling do
@moduledoc """
With Application Auto Scaling, you can configure automatic scaling for the
following resources:
* Amazon AppStream 2.0 fleets
* Amazon Aurora Replicas
* Amazon Comprehend document classification and entity recognizer
endpoints
* Amazon DynamoDB tables and global secondary indexes throughput
capacity
* Amazon ECS services
* Amazon ElastiCache for Redis clusters (replication groups)
* Amazon EMR clusters
* Amazon Keyspaces (for Apache Cassandra) tables
* Lambda function provisioned concurrency
* Amazon Managed Streaming for Apache Kafka broker storage
* Amazon Neptune clusters
* Amazon SageMaker endpoint variants
* Spot Fleets (Amazon EC2)
* Custom resources provided by your own applications or services
## API Summary
The Application Auto Scaling service API includes three key sets of actions:
* Register and manage scalable targets - Register Amazon Web
Services or custom resources as scalable targets (a resource that Application
Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve
information on existing scalable targets.
* Configure and manage automatic scaling - Define scaling policies
to dynamically scale your resources in response to CloudWatch alarms, schedule
one-time or recurring scaling actions, and retrieve your recent scaling activity
history.
* Suspend and resume scaling - Temporarily suspend and later resume
automatic scaling by calling the
[RegisterScalableTarget](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_RegisterScalableTarget.html) API action for any Application Auto Scaling scalable target. You can suspend and
resume (individually or in combination) scale-out activities that are triggered
by a scaling policy, scale-in activities that are triggered by a scaling policy,
and scheduled scaling.
To learn more about Application Auto Scaling, including information about
granting IAM users required permissions for Application Auto Scaling actions,
see the [Application Auto Scaling User
Guide](https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html).
"""
alias AWS.Client
alias AWS.Request
def metadata do
%AWS.ServiceMetadata{
abbreviation: nil,
api_version: "2016-02-06",
content_type: "application/x-amz-json-1.1",
credential_scope: nil,
endpoint_prefix: "application-autoscaling",
global?: false,
protocol: "json",
service_id: "Application Auto Scaling",
signature_version: "v4",
signing_name: "application-autoscaling",
target_prefix: "AnyScaleFrontendService"
}
end
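# A hedged usage sketch (aws-elixir client convention; values illustrative):
#
#     client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")
#     {:ok, result, _http_response} =
#       AWS.ApplicationAutoScaling.describe_scalable_targets(client, %{
#         "ServiceNamespace" => "ecs"
#       })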
@doc """
Deletes the specified scaling policy for an Application Auto Scaling scalable
target.
Deleting a step scaling policy deletes the underlying alarm action, but does not
delete the CloudWatch alarm associated with the scaling policy, even if it no
longer has an associated action.
For more information, see [Delete a step scaling policy](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-step-scaling-policies.html#delete-step-scaling-policy)
and [Delete a target tracking scaling policy](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html#delete-target-tracking-policy)
in the *Application Auto Scaling User Guide*.
"""
def delete_scaling_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteScalingPolicy", input, options)
end
@doc """
Deletes the specified scheduled action for an Application Auto Scaling scalable
target.
For more information, see [Delete a scheduled action](https://docs.aws.amazon.com/autoscaling/application/userguide/scheduled-scaling-additional-cli-commands.html#delete-scheduled-action)
in the *Application Auto Scaling User Guide*.
"""
def delete_scheduled_action(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteScheduledAction", input, options)
end
@doc """
Deregisters an Application Auto Scaling scalable target when you have finished
using it.
To see which resources have been registered, use
[DescribeScalableTargets](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_DescribeScalableTargets.html).
Deregistering a scalable target deletes the scaling policies and the scheduled
actions that are associated with it.
"""
def deregister_scalable_target(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeregisterScalableTarget", input, options)
end
@doc """
Gets information about the scalable targets in the specified namespace.
You can filter the results using `ResourceIds` and `ScalableDimension`.
"""
def describe_scalable_targets(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeScalableTargets", input, options)
end
@doc """
Provides descriptive information about the scaling activities in the specified
namespace from the previous six weeks.
You can filter the results using `ResourceId` and `ScalableDimension`.
"""
def describe_scaling_activities(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeScalingActivities", input, options)
end
@doc """
Describes the Application Auto Scaling scaling policies for the specified
service namespace.
You can filter the results using `ResourceId`, `ScalableDimension`, and
`PolicyNames`.
For more information, see [Target tracking scaling policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html)
and [Step scaling policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-step-scaling-policies.html)
in the *Application Auto Scaling User Guide*.
"""
def describe_scaling_policies(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeScalingPolicies", input, options)
end
@doc """
Describes the Application Auto Scaling scheduled actions for the specified
service namespace.
You can filter the results using the `ResourceId`, `ScalableDimension`, and
`ScheduledActionNames` parameters.
For more information, see [Scheduled scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html)
and [Managing scheduled scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/scheduled-scaling-additional-cli-commands.html)
in the *Application Auto Scaling User Guide*.
"""
def describe_scheduled_actions(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DescribeScheduledActions", input, options)
end
@doc """
Creates or updates a scaling policy for an Application Auto Scaling scalable
target.
Each scalable target is identified by a service namespace, resource ID, and
scalable dimension. A scaling policy applies to the scalable target identified
by those three attributes. You cannot create a scaling policy until you have
registered the resource as a scalable target.
Multiple scaling policies can be in force at the same time for the same scalable
target. You can have one or more target tracking scaling policies, one or more
step scaling policies, or both. However, there is a chance that multiple
policies could conflict, instructing the scalable target to scale out or in at
the same time. Application Auto Scaling gives precedence to the policy that
provides the largest capacity for both scale out and scale in. For example, if
one policy increases capacity by 3, another policy increases capacity by 200
percent, and the current capacity is 10, Application Auto Scaling uses the
policy with the highest calculated capacity (200% of 10 = 20) and scales out to
30.
We recommend caution, however, when using target tracking scaling policies with
step scaling policies because conflicts between these policies can cause
undesirable behavior. For example, if the step scaling policy initiates a
scale-in activity before the target tracking policy is ready to scale in, the
scale-in activity will not be blocked. After the scale-in activity completes,
the target tracking policy could instruct the scalable target to scale out
again.
For more information, see [Target tracking scaling policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html)
and [Step scaling policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-step-scaling-policies.html)
in the *Application Auto Scaling User Guide*.
If a scalable target is deregistered, the scalable target is no longer available
to execute scaling policies. Any scaling policies that were specified for the
scalable target are deleted.
"""
def put_scaling_policy(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutScalingPolicy", input, options)
end
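# A hedged sketch of a target tracking policy request (request shape follows
# the PutScalingPolicy API; values are illustrative):
#
#     AWS.ApplicationAutoScaling.put_scaling_policy(client, %{
#       "PolicyName" => "cpu-75",
#       "PolicyType" => "TargetTrackingScaling",
#       "ServiceNamespace" => "ecs",
#       "ResourceId" => "service/default/web",
#       "ScalableDimension" => "ecs:service:DesiredCount",
#       "TargetTrackingScalingPolicyConfiguration" => %{
#         "TargetValue" => 75.0,
#         "PredefinedMetricSpecification" => %{
#           "PredefinedMetricType" => "ECSServiceAverageCPUUtilization"
#         }
#       }
#     })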
@doc """
Creates or updates a scheduled action for an Application Auto Scaling scalable
target.
Each scalable target is identified by a service namespace, resource ID, and
scalable dimension. A scheduled action applies to the scalable target identified
by those three attributes. You cannot create a scheduled action until you have
registered the resource as a scalable target.
When start and end times are specified with a recurring schedule using a cron
expression or rates, they form the boundaries for when the recurring action
starts and stops.
To update a scheduled action, specify the parameters that you want to change. If
you don't specify start and end times, the old values are deleted.
For more information, see [Scheduled scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html)
in the *Application Auto Scaling User Guide*.
If a scalable target is deregistered, the scalable target is no longer available
to run scheduled actions. Any scheduled actions that were specified for the
scalable target are deleted.
"""
def put_scheduled_action(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "PutScheduledAction", input, options)
end
@doc """
Registers or updates a scalable target.
A scalable target is a resource that Application Auto Scaling can scale out and
scale in. Scalable targets are uniquely identified by the combination of
resource ID, scalable dimension, and namespace.
When you register a new scalable target, you must specify values for minimum and
maximum capacity. Current capacity will be adjusted within the specified range
when scaling starts. Application Auto Scaling scaling policies will not scale
capacity to values that are outside of this range.
After you register a scalable target, you do not need to register it again to
use other Application Auto Scaling operations. To see which resources have been
registered, use
[DescribeScalableTargets](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_DescribeScalableTargets.html). You can also view the scaling policies for a service namespace by using
[DescribeScalableTargets](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_DescribeScalableTargets.html).
If you no longer need a scalable target, you can deregister it by using
[DeregisterScalableTarget](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_DeregisterScalableTarget.html).
To update a scalable target, specify the parameters that you want to change.
Include the parameters that identify the scalable target: resource ID, scalable
dimension, and namespace. Any parameters that you don't specify are not changed
by this update request.
If you call the `RegisterScalableTarget` API to update an existing scalable
target, Application Auto Scaling retrieves the current capacity of the resource.
If it is below the minimum capacity or above the maximum capacity, Application
Auto Scaling adjusts the capacity of the scalable target to place it within
these bounds, even if you don't include the `MinCapacity` or `MaxCapacity`
request parameters.
"""
def register_scalable_target(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "RegisterScalableTarget", input, options)
end
end
# path: lib/aws/generated/application_auto_scaling.ex
defmodule ReadDoc.Options do
use ReadDoc.Types
defstruct begin_trigger:
~s{\\A \\s* <!-- \\s+ begin \\s @doc \\s ([\\w.?!]+) \\s+ --> \\s* \\z },
end_trigger: ~s{\\A \\s* <!-- \\s+ end \\s @doc \\s ([\\w.?!]+) \\s+ --> \\s* \\z },
keep_copy: false,
silent: false,
fix_errors: true,
begin_rgx: nil,
end_rgx: nil
@type t :: %__MODULE__{
begin_trigger: String.t(),
end_trigger: String.t(),
keep_copy: boolean,
silent: boolean,
fix_errors: boolean,
begin_rgx: maybe(Regex.t()),
end_rgx: maybe(Regex.t())
}
@moduledoc """
## Usage:
mix read_doc [options] files...
Each file is scanned for blocks of lines starting with `<!-- begin @doc...` and
ending with `<!-- end @doc...`.
Then the content between two matching lines is replaced with the corresponding docstring.
The following options are implemented
--silent no messages emitted to :stderr (defaults to false)
--keep-copy a copy of the original input file is kept by appending `.bup<n>` where n runs from 1 to the
next available number for which no copy exists yet (defaults to false)
--fix-errors defaults to true! (deactivate via --no-fix-errors), and closing `<!-- end @doc...` lines
with no matching `<!-- begin @doc...` are removed from the input
--begin-trigger defaults to `"\\A \\s* <!-- \\s+ begin \\s @doc \\s ([\\w.?!]+) \\s+ --> \\s* \\z"`.
This value is interpreted as an extended regex indicating the beginning of a docstring block, where
the first capture defines the module/function of the docstring
--end-trigger defaults to `"\\A \\s* <!-- \\s+ end \\s @doc \\s ([\\w.?!]+) \\s+ --> \\s* \\z"`.
This value is interpreted as an extended regex indicating the end of a docstring block, where
the first capture defines the module/function of the docstring
"""
@doc """
Creates an Options struct with dependent fields
"""
@spec finalize(t) :: t
def finalize(options) do
%{
options
| begin_rgx: Regex.compile!(options.begin_trigger, "x"),
end_rgx: Regex.compile!(options.end_trigger, "x")
}
end
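# Example sketch: compiling the default triggers into extended-mode regexes
# and matching a begin line. The module name in the marker is illustrative.
#
#   opts = ReadDoc.Options.finalize(%ReadDoc.Options{})
#   Regex.match?(opts.begin_rgx, "<!-- begin @doc My.Module -->")
#   #=> true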
@spec croak(t, String.t()) :: :ok
def croak(%__MODULE__{silent: true}, _), do: :ok
def croak(%__MODULE__{silent: false}, message), do: IO.puts(:stderr, message)
end
# --- end of lib/read_doc/options.ex ---
defmodule HSLuv do
@moduledoc """
Convert colors between HSLuv and RGB color spaces
"""
import :math
@min_f 0.00000001
@max_f 99.9999999
@m {
{3.240969941904521, -1.537383177570093, -0.498610760293},
{-0.96924363628087, 1.87596750150772, 0.041555057407175},
{0.055630079696993, -0.20397695888897, 1.056971514242878}
}
@m_inv {
{0.41239079926595, 0.35758433938387, 0.18048078840183},
{0.21263900587151, 0.71516867876775, 0.072192315360733},
{0.019330818715591, 0.11919477979462, 0.95053215224966}
}
@ref_y 1.0
@ref_u 0.19783000664283
@ref_v 0.46831999493879
@kappa 903.2962962
@epsilon 0.0088564516
@enforce_keys [:h, :s, :l]
defstruct @enforce_keys
@doc """
Create an HSLuv color from values
Both integer and floats are supported.
- `h` must be between 0 and 360 inclusive
- `s` must be between 0 and 100 inclusive
- `l` must be between 0 and 100 inclusive
"""
def new(h, s, l) do
%HSLuv{h: h, s: s, l: l}
end
@doc """
Create an HSLuv color from RGB values
Both integer and floats are supported.
- `r` must be between 0 and 255 inclusive
- `g` must be between 0 and 255 inclusive
- `b` must be between 0 and 255 inclusive
## Examples
iex> HSLuv.rgb(200, 150, 20)
%HSLuv{h: 57.26077539223336, l: 65.07659371178795, s: 97.61326139925325}
"""
def rgb(r, g, b) do
{h, s, l} = rgb_to_hsluv({r / 255.0, g / 255.0, b / 255.0})
%HSLuv{h: h, s: s, l: l}
end
@doc """
Convert HSLuv to RGB.
- `h` must be between 0 and 360 inclusive
- `s` must be between 0 and 100 inclusive
- `l` must be between 0 and 100 inclusive
Returned components are between 0 and 255 inclusive
## Examples
iex> HSLuv.to_rgb(20, 50, 20)
{75, 38, 31}
"""
def to_rgb(h, s, l) do
new(h, s, l)
|> to_rgb()
end
def to_rgb(%HSLuv{h: h, s: s, l: l}) do
{r, g, b} = hsluv_to_rgb({h, s, l})
{round(r * 255.0), round(g * 255.0), round(b * 255.0)}
end
@doc """
Convert RGB to HSLuv.
## Examples
iex> HSLuv.to_hsluv(20, 50, 20)
{127.71501294923954, 67.94319276530133, 17.829530512200364}
"""
def to_hsluv(r, g, b) do
c = rgb(r, g, b)
{c.h, c.s, c.l}
end
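# Roundtrip sketch (illustrative): converting RGB to HSLuv and back should
# reproduce the input up to rounding.
#
#   {h, s, l} = HSLuv.to_hsluv(200, 150, 20)
#   HSLuv.to_rgb(h, s, l)
#   #=> {200, 150, 20}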
def hsluv_to_rgb([h, s, l]), do: hsluv_to_rgb({h, s, l})
def hsluv_to_rgb({_h, _s, _l} = hsl) do
hsl
|> hsluv_to_lch()
|> lch_to_luv()
|> luv_to_xyz()
|> xyz_to_rgb()
end
def hpluv_to_rgb([h, s, l]), do: hpluv_to_rgb({h, s, l})
def hpluv_to_rgb({_h, _s, _l} = hsl) do
hsl
|> hpluv_to_lch()
|> lch_to_luv()
|> luv_to_xyz()
|> xyz_to_rgb()
end
def rgb_to_hsluv([r, g, b]), do: rgb_to_hsluv({r, g, b})
def rgb_to_hsluv({_r, _g, _b} = rgb) do
rgb
|> rgb_to_xyz()
|> xyz_to_luv()
|> luv_to_lch()
|> lch_to_hsluv()
end
def rgb_to_hpluv([r, g, b]), do: rgb_to_hpluv({r, g, b})
def rgb_to_hpluv({_r, _g, _b} = rgb) do
rgb
|> rgb_to_xyz()
|> xyz_to_luv()
|> luv_to_lch()
|> lch_to_hpluv()
end
def lch_to_luv({l, c, h}) do
h_rad = h / 360.0 * 2.0 * pi()
{l, cos(h_rad) * c, sin(h_rad) * c}
end
def lch_to_luv([l, c, h]), do: lch_to_luv({l, c, h})
def luv_to_lch({l, u, v}) do
c = sqrt(u * u + v * v)
h =
if c < @min_f do
0.0
else
atan2(v, u) * 180.0 / pi()
end
h =
if h < 0.0 do
360.0 + h
else
h
end
{l, c, h}
end
def luv_to_lch([l, u, v]), do: luv_to_lch({l, u, v})
def xyz_to_rgb({_x, _y, _z} = xyz) do
{m1, m2, m3} = @m
{a, b, c} = {dot(m1, xyz), dot(m2, xyz), dot(m3, xyz)}
{from_linear(a), from_linear(b), from_linear(c)}
end
def xyz_to_rgb([x, y, z]), do: xyz_to_rgb({x, y, z})
def rgb_to_xyz({r, g, b}) do
{m1, m2, m3} = @m_inv
rgb = {to_linear(r), to_linear(g), to_linear(b)}
{dot(m1, rgb), dot(m2, rgb), dot(m3, rgb)}
end
def rgb_to_xyz([r, g, b]), do: rgb_to_xyz({r, g, b})
def xyz_to_luv({x, y, z}) do
l = f(y)
if l == 0.0 || (x == 0.0 && y == 0.0 && z == 0.0) do
{0.0, 0.0, 0.0}
else
var_u = 4.0 * x / (x + 15.0 * y + 3.0 * z)
var_v = 9.0 * y / (x + 15.0 * y + 3.0 * z)
u = 13.0 * l * (var_u - @ref_u)
v = 13.0 * l * (var_v - @ref_v)
{l, u, v}
end
end
def xyz_to_luv([x, y, z]), do: xyz_to_luv({x, y, z})
def luv_to_xyz({l, u, v}) do
if l == 0.0 do
{0.0, 0.0, 0.0}
else
var_y = f_inv(l)
var_u = u / (13.0 * l) + @ref_u
var_v = v / (13.0 * l) + @ref_v
y = var_y * @ref_y
x = 0.0 - 9.0 * y * var_u / ((var_u - 4.0) * var_v - var_u * var_v)
z = (9.0 * y - 15.0 * var_v * y - var_v * x) / (3.0 * var_v)
{x, y, z}
end
end
def luv_to_xyz([l, u, v]), do: luv_to_xyz({l, u, v})
def hsluv_to_lch({h, s, l}) do
cond do
l > @max_f ->
{100.0, 0, h}
l < @min_f ->
{0.0, 0.0, h}
true ->
{l, max_safe_chroma_for_lh(l, h) / 100.0 * s, h}
end
end
def hsluv_to_lch([h, s, l]), do: hsluv_to_lch({h, s, l})
def lch_to_hsluv({l, c, h}) do
cond do
l > @max_f ->
{h, 0, 100.0}
l < @min_f ->
{h, 0.0, 0.0}
true ->
max_chroma = max_safe_chroma_for_lh(l, h)
{h, c / max_chroma * 100.0, l}
end
end
def lch_to_hsluv([l, c, h]), do: lch_to_hsluv({l, c, h})
def hpluv_to_lch({h, s, l}) do
cond do
l > @max_f ->
{100.0, 0, h}
l < @min_f ->
{0.0, 0.0, h}
true ->
{l, max_safe_chroma_for_l(l) / 100.0 * s, h}
end
end
def hpluv_to_lch([h, s, l]), do: hpluv_to_lch({h, s, l})
def lch_to_hpluv({l, c, h}) do
cond do
l > @max_f ->
{h, 0.0, 100.0}
l < @min_f ->
{h, 0.0, 0.0}
true ->
{h, c / max_safe_chroma_for_l(l) * 100.0, l}
end
end
def lch_to_hpluv([l, c, h]), do: lch_to_hpluv({l, c, h})
def get_bounds(l) do
sub = pow(l + 16.0, 3.0) / 1_560_896.0
sub =
if sub > @epsilon do
sub
else
l / @kappa
end
compute = fn {m1, m2, m3}, t ->
top1 = (284_517.0 * m1 - 94839.0 * m3) * sub
top2 =
(838_422.0 * m3 + 769_860.0 * m2 + 731_718.0 * m1) * l * sub -
769_860.0 * t * l
bottom = (632_260.0 * m3 - 126_452.0 * m2) * sub + 126_452.0 * t
{top1 / bottom, top2 / bottom}
end
{m1, m2, m3} = @m
[
compute.(m1, 0.0),
compute.(m1, 1.0),
compute.(m2, 0.0),
compute.(m2, 1.0),
compute.(m3, 0.0),
compute.(m3, 1.0)
]
end
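# The six {slope, intercept} pairs returned above describe the lines bounding
# the sRGB gamut in the chroma plane at lightness `l`; the max_safe_chroma_*
# functions below take the minimum non-negative distance to those lines.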
def max_safe_chroma_for_l(l) do
val = 1.7976931348623157e308
l
|> get_bounds()
|> Enum.reduce(val, fn bound, val ->
length = distance_line_from_origin(bound)
if length >= 0.0 do
min(val, length)
else
val
end
end)
end
def max_safe_chroma_for_lh(l, h) do
h_rad = h / 360.0 * pi() * 2.0
val = 1.7976931348623157e308
l
|> get_bounds()
|> Enum.reduce(val, fn bound, val ->
length = length_of_ray_until_intersect(h_rad, bound)
if length >= 0.0 do
min(val, length)
else
val
end
end)
end
def distance_line_from_origin({slope, intercept}) do
abs(intercept) / sqrt(pow(slope, 2.0) + 1.0)
end
def length_of_ray_until_intersect(theta, {slope, intercept}) do
intercept / (sin(theta) - slope * cos(theta))
end
def dot({a0, a1, a2}, {b0, b1, b2}) do
a0 * b0 + a1 * b1 + a2 * b2
end
defp f(t) do
if t > @epsilon do
116.0 * pow(t / @ref_y, 1.0 / 3.0) - 16.0
else
t / @ref_y * @kappa
end
end
defp f_inv(t) do
if t > 8 do
@ref_y * pow((t + 16.0) / 116.0, 3.0)
else
@ref_y * t / @kappa
end
end
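# to_linear/1 and from_linear/1 below implement the standard sRGB transfer
# function (gamma companding), with the usual 0.04045 / 0.0031308 cutoffs
# separating the linear segment from the power curve.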
defp to_linear(c) do
if c > 0.04045 do
pow((c + 0.055) / 1.055, 2.4)
else
c / 12.92
end
end
defp from_linear(c) do
if c <= 0.0031308 do
12.92 * c
else
1.055 * pow(c, 1.0 / 2.4) - 0.055
end
end
end
# --- end of lib/hsluv.ex ---
defmodule Exconfig do
@moduledoc """
The module _Exconfig_ provides the API for the Exconfig-package.
This is
- `get/0` ... get all cached settings
- `get/3` ... get a specific entry (read if not cached) [macro]
- and `clear_cache!/0` ... remove all entries from the cache
All usage of `Exconfig.get` will be captured and written to
`configuration.log` at termination.
### Configuration
config :exconfig, config_log_file: "configuration.log"
When you compile your application for any environment but `prod`,
the macro will record its usage. In the production environment
this doesn't happen, and using the macro is a straightforward call
to `Exconfig._get(env, key, default)`
### configuration.log
`configuration.log` can be used as a template for creating your `setup.env` file.
"""
require Logger
require Exconfig.ConfigLogger
alias Exconfig.Cache
@doc """
The macro records the usage of `get` with the `Exconfig.ConfigLogger` module
and then reads the configuration as usual.
"""
defmacro get(env, key, default \\ nil) do
unless Mix.env() == :prod, do: Exconfig.ConfigLogger.record_usage(env, key, default)
quote do
e = unquote(env)
k = unquote(key)
d = unquote(default)
Exconfig._get(e, k, d)
end
end
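# Usage sketch (illustrative names): reading a configuration value with a
# default. `:my_app` and `:http_port` are assumptions for the example.
#
#   require Exconfig
#   port = Exconfig.get(:my_app, :http_port, 4000)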
@doc """
Get the entire loaded cache.
## Examples
iex> Exconfig.get()
%{}
"""
def get do
Cache.get()
end
@doc """
Remove all entries from the cache, so they will be re-read if accessed
again.
"""
def clear_cache!() do
Cache.clear!()
end
@doc """
Get a value from cache or load it if not cached yet.
## Examples
### Return the default if key is not found anywhere
iex> Exconfig.get(:exconfig, :unknown_config_key, :not_found)
:not_found
### Return from a value configured in `config/*`
iex> Exconfig.get(:logger, :level, :error)
:debug
### Return a value provided as a system environment var
The given key will be converted to a string if it is an `:atom`,
and it will be uppercased.
iex> System.put_env("ELIXIRRULEZ", "true")
iex> Exconfig.get(:exconfig, :elixirrulez, :not_found)
"true"
"""
def _get(application_key, key, default \\ nil) do
lookup(application_key, key, default)
|> load()
|> loaded_or_default()
end
defp lookup(application_key, key, default) do
case Cache.lookup(application_key, key, default) do
{:error, :key_not_found} -> {:not_loaded, application_key, key, default}
value -> {:cached, application_key, key, value}
end
end
defp load({:cached, application_key, key, value}), do: {:cached, application_key, key, value}
defp load({_state, _application_key, _key, _default} = args) do
args
|> load_application_env()
|> load_system_env()
|> update_cache()
end
defp update_cache({:not_loaded, _application_key, _key, _value} = args), do: args
defp update_cache({:loaded, application_key, key, value}) do
Cache.update(application_key, key, value)
{:cached, application_key, key, value}
end
defp load_application_env({state, application_key, key, default}) do
case Application.get_env(application_key, key) do
nil -> {state, application_key, key, default}
value -> {:loaded, application_key, key, value}
end
end
defp load_system_env({state, application_key, key, default}) do
case System.get_env(normalize_env_key(key)) do
nil -> {state, application_key, key, default}
value -> {:loaded, application_key, key, value}
end
end
defp normalize_env_key(key) when is_atom(key), do: to_string(key) |> normalize_env_key()
defp normalize_env_key(key) when is_binary(key) do
key
|> String.upcase()
end
defp loaded_or_default({_, _, _, value_or_default}), do: value_or_default
end
# --- end of lib/exconfig.ex ---
defmodule StepFlow.WorkflowDefinitions do
@moduledoc """
The WorkflowDefinitions context.
"""
import Ecto.Query, warn: false
alias StepFlow.Repo
alias StepFlow.WorkflowDefinitions.WorkflowDefinition
require Logger
@doc """
Returns the list of Workflow Definitions.
"""
def list_workflow_definitions(params \\ %{}) do
page =
Map.get(params, "page", 0)
|> StepFlow.Integer.force()
size =
Map.get(params, "size", 10)
|> StepFlow.Integer.force()
mode = Map.get(params, "mode", "full")
offset = page * size
query =
from(workflow_definition in WorkflowDefinition)
|> check_rights(Map.get(params, "right_action"), Map.get(params, "rights"))
|> filter_by_label_or_identifier(Map.get(params, "search"))
|> filter_by_versions(Map.get(params, "versions"))
|> select_by_mode(mode)
total_query = from(item in subquery(query), select: count(item.id))
total =
Repo.all(total_query)
|> List.first()
query =
from(
workflow_definition in subquery(query),
order_by: [
desc: workflow_definition.version_major,
desc: workflow_definition.version_minor,
desc: workflow_definition.version_micro
]
)
|> paginate(offset, size)
workflow_definitions = Repo.all(query)
%{
data: workflow_definitions,
total: total,
page: page,
size: size,
mode: mode
}
end
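# Usage sketch (illustrative values): first page of ten latest-version
# definitions whose label or identifier matches "transcode".
#
#   list_workflow_definitions(%{
#     "page" => 0,
#     "size" => 10,
#     "versions" => ["latest"],
#     "search" => "transcode"
#   })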
defp paginate(query, offset, size) do
case size do
-1 ->
query
_ ->
from(
workflow_definition in subquery(query),
offset: ^offset,
limit: ^size
)
end
end
defp check_rights(query, right_action, user_rights) do
case {right_action, user_rights} do
{nil, _} ->
query
{_, nil} ->
query
{right_action, user_rights} ->
from(
workflow_definition in subquery(query),
join: rights in assoc(workflow_definition, :rights),
where: rights.action == ^right_action,
where: fragment("?::varchar[] && ?::varchar[]", rights.groups, ^user_rights)
)
end
end
def filter_by_versions(query, versions) do
case versions do
["latest"] ->
from(
workflow_definition in subquery(query),
order_by: [
desc: workflow_definition.version_major,
desc: workflow_definition.version_minor,
desc: workflow_definition.version_micro
],
distinct: :identifier
)
versions when is_list(versions) and length(versions) != 0 ->
from(
workflow_definition in subquery(query),
where:
fragment(
"concat(?, '.', ?, '.', ?) = ANY(?)",
workflow_definition.version_major,
workflow_definition.version_minor,
workflow_definition.version_micro,
^versions
)
)
_ ->
query
end
end
defp filter_by_label_or_identifier(query, search) do
case search do
nil ->
query
search ->
like = "%#{search}%"
from(
workflow_definition in subquery(query),
where:
ilike(workflow_definition.label, ^like) or
ilike(workflow_definition.identifier, ^search)
)
end
end
defp select_by_mode(query, mode) do
case mode do
"simple" ->
from(
workflow_definition in subquery(query),
select: %{
id: workflow_definition.id,
identifier: workflow_definition.identifier,
label: workflow_definition.label,
version_major: workflow_definition.version_major,
version_minor: workflow_definition.version_minor,
version_micro: workflow_definition.version_micro
}
)
"full" ->
query
_ ->
query
end
end
@doc """
Gets a single WorkflowDefinition.
Raises `Ecto.NoResultsError` if the WorkflowDefinition does not exist.
"""
def get_workflow_definition(identifier) do
query =
from(workflow_definition in WorkflowDefinition,
preload: [:rights],
where: workflow_definition.identifier == ^identifier,
order_by: [
desc: workflow_definition.version_major,
desc: workflow_definition.version_minor,
desc: workflow_definition.version_micro
],
limit: 1
)
Repo.one(query)
end
end
# --- end of lib/step_flow/workflow_definitions/workflow_definitions.ex ---
defmodule CFXXL.CertUtils do
@moduledoc """
A module containing utility functions to extract information from PEM certificates
"""
@aki_oid {2, 5, 29, 35}
@common_name_oid {2, 5, 4, 3}
@z_char 90
require Record
Record.defrecordp(
:certificate,
:Certificate,
Record.extract(:Certificate, from_lib: "public_key/include/public_key.hrl")
)
Record.defrecordp(
:tbs_certificate,
:TBSCertificate,
Record.extract(:TBSCertificate, from_lib: "public_key/include/public_key.hrl")
)
Record.defrecordp(
:extension,
:Extension,
Record.extract(:Extension, from_lib: "public_key/include/public_key.hrl")
)
Record.defrecordp(
:authority_key_identifier,
:AuthorityKeyIdentifier,
Record.extract(:AuthorityKeyIdentifier, from_lib: "public_key/include/public_key.hrl")
)
Record.defrecordp(
:attribute_type_and_value,
:AttributeTypeAndValue,
Record.extract(:AttributeTypeAndValue, from_lib: "public_key/include/public_key.hrl")
)
Record.defrecordp(
:validity,
:Validity,
Record.extract(:Validity, from_lib: "public_key/include/public_key.hrl")
)
@doc """
Extracts the serial number of a certificate.
`cert` must be a string containing a PEM encoded certificate.
Returns the serial number as string or raises if there's an error.
"""
def serial_number!(cert) do
cert
|> tbs()
|> tbs_certificate(:serialNumber)
|> to_string()
end
@doc """
Extracts the authority key identifier of a certificate.
`cert` must be a string containing a PEM encoded certificate.
Returns the authority key identifier as string or raises if
it doesn't find one or there's an error.
"""
def authority_key_identifier!(cert) do
extensions =
cert
|> tbs()
|> tbs_certificate(:extensions)
|> Enum.map(fn x -> extension(x) end)
case Enum.find(extensions, fn ext -> ext[:extnID] == @aki_oid end) do
nil ->
raise "no AuthorityKeyIdentifier in certificate"
aki_extension ->
:public_key.der_decode(:AuthorityKeyIdentifier, aki_extension[:extnValue])
|> authority_key_identifier(:keyIdentifier)
|> Base.encode16(case: :lower)
end
end
@doc """
Extracts the Common Name of a certificate.
`cert` must be a string containing a PEM encoded certificate.
Returns the Common Name as string or nil if it doesn't find one, raises if there's an error.
"""
def common_name!(cert) do
{:rdnSequence, subject_attributes} =
cert
|> tbs()
|> tbs_certificate(:subject)
common_name =
subject_attributes
|> Enum.map(fn [list_wrapped_attr] -> attribute_type_and_value(list_wrapped_attr) end)
|> Enum.find(fn attr -> attr[:type] == @common_name_oid end)
if common_name do
case :public_key.der_decode(:X520CommonName, common_name[:value]) do
{:printableString, cn} ->
to_string(cn)
{:utf8String, cn} ->
to_string(cn)
end
else
nil
end
end
@doc """
Extracts the not_after field (expiration) of a certificate.
`cert` must be a string containing a PEM encoded certificate.
Returns not_after as `DateTime` or raises if there's an error.
"""
def not_after!(cert) do
cert
|> tbs()
|> tbs_certificate(:validity)
|> validity(:notAfter)
|> cert_time_tuple_to_datetime()
end
@doc """
Extracts the not_before field of a certificate.
`cert` must be a string containing a PEM encoded certificate.
Returns not_before as `DateTime` or raises if there's an error.
"""
def not_before!(cert) do
cert
|> tbs()
|> tbs_certificate(:validity)
|> validity(:notBefore)
|> cert_time_tuple_to_datetime()
end
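# Per RFC 5280, two-digit UTCTime years from 50 to 99 map to 19xx and years
# from 00 to 49 map to 20xx; the first clause below applies that pivot and
# then delegates to the generalTime clause.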
defp cert_time_tuple_to_datetime({:utcTime, [y0, y1 | _rest] = time_charlist}) do
short_year = parse_charlist_int([y0, y1])
prefix =
if short_year >= 50 do
'19'
else
'20'
end
cert_time_tuple_to_datetime({:generalTime, prefix ++ time_charlist})
end
defp cert_time_tuple_to_datetime(
{_, [y0, y1, y2, y3, m0, m1, d0, d1, h0, h1, mn0, mn1, s0, s1, @z_char]}
) do
year = parse_charlist_int([y0, y1, y2, y3])
month = parse_charlist_int([m0, m1])
day = parse_charlist_int([d0, d1])
hour = parse_charlist_int([h0, h1])
minute = parse_charlist_int([mn0, mn1])
second = parse_charlist_int([s0, s1])
{:ok, naive} = NaiveDateTime.new(year, month, day, hour, minute, second)
DateTime.from_naive!(naive, "Etc/UTC")
end
defp parse_charlist_int(charlist) do
{parsed, ""} =
charlist
|> to_string()
|> Integer.parse()
parsed
end
defp tbs(cert) do
cert
|> :public_key.pem_decode()
|> hd()
|> :public_key.pem_entry_decode()
|> certificate(:tbsCertificate)
end
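# Usage sketch (the file path is illustrative): extracting fields from a PEM
# certificate read from disk.
#
#   pem = File.read!("cert.pem")
#   CFXXL.CertUtils.common_name!(pem)
#   CFXXL.CertUtils.not_after!(pem)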
end
# --- end of lib/cfxxl/cert_utils.ex ---
defmodule AWS.GameLift do
@moduledoc """
Amazon GameLift Service
GameLift provides solutions for hosting session-based multiplayer game
servers in the cloud, including tools for deploying, operating, and scaling
game servers. Built on AWS global computing infrastructure, GameLift helps
you deliver high-performance, high-reliability, low-cost game servers while
dynamically scaling your resource usage to meet player demand.
**About GameLift solutions**
Get more information on these GameLift solutions in the [Amazon GameLift
Developer
Guide](http://docs.aws.amazon.com/gamelift/latest/developerguide/).
<ul> <li> Managed GameLift -- GameLift offers a fully managed service to
set up and maintain computing machines for hosting, manage game session and
player session life cycle, and handle security, storage, and performance
tracking. You can use automatic scaling tools to balance hosting costs
against meeting player demand, configure your game session management to
minimize player latency, or add FlexMatch for matchmaking.
</li> <li> Managed GameLift with Realtime Servers – With GameLift Realtime
Servers, you can quickly configure and set up game servers for your game.
Realtime Servers provides a game server framework with core Amazon GameLift
infrastructure already built in.
</li> <li> GameLift FleetIQ – Use GameLift FleetIQ as a standalone feature
while managing your own EC2 instances and Auto Scaling groups for game
hosting. GameLift FleetIQ provides optimizations that make low-cost Spot
Instances viable for game hosting.
</li> </ul> **About this API Reference**
This reference guide describes the low-level service API for Amazon
GameLift. You can find links to language-specific SDK guides and the AWS
CLI reference with each operation and data type topic. Useful links:
<ul> <li> [GameLift API operations listed by
tasks](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-awssdk.html)
</li> <li> [ GameLift tools and
resources](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-components.html)
</li> </ul>
"""
@doc """
Registers a player's acceptance or rejection of a proposed FlexMatch match.
A matchmaking configuration may require player acceptance; if so, then
matches built with that configuration cannot be completed unless all
players accept the proposed match within a specified time limit.
When FlexMatch builds a match, all the matchmaking tickets involved in the
proposed match are placed into status `REQUIRES_ACCEPTANCE`. This is a
trigger for your game to get acceptance from all players in the ticket.
Acceptances are only valid for tickets when they are in this status; all
other acceptances result in an error.
To register acceptance, specify the ticket ID, a response, and one or more
players. Once all players have registered acceptance, the matchmaking
tickets advance to status `PLACING`, where a new game session is created
for the match.
If any player rejects the match, or if acceptances are not received before
a specified timeout, the proposed match is dropped. The matchmaking tickets
are then handled in one of two ways: For tickets where one or more players
rejected the match, the ticket status is returned to `SEARCHING` to find a
new match. For tickets where one or more players failed to respond, the
ticket status is set to `CANCELLED`, and processing is terminated. A new
matchmaking request for these players can be submitted as needed.
**Learn more**
[ Add FlexMatch to a Game
Client](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-client.html)
[ FlexMatch Events
Reference](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-events.html)
**Related operations**
<ul> <li> `StartMatchmaking`
</li> <li> `DescribeMatchmaking`
</li> <li> `StopMatchmaking`
</li> <li> `AcceptMatch`
</li> <li> `StartMatchBackfill`
</li> </ul>
"""
def accept_match(client, input, options \\ []) do
request(client, "AcceptMatch", input, options)
end
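# Call sketch (illustrative IDs): registering acceptance for all players on
# a ticket, following the AcceptMatch request syntax.
#
#   accept_match(client, %{
#     "TicketId" => "ticket-123",
#     "PlayerIds" => ["player-1", "player-2"],
#     "AcceptanceType" => "ACCEPT"
#   })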
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Locates an available game server and temporarily reserves it to host
gameplay and players. This operation is called from a game client or client
service (such as a matchmaker) to request hosting resources for a new game
session. In response, GameLift FleetIQ locates an available game server,
places it in `CLAIMED` status for 60 seconds, and returns connection
information that players can use to connect to the game server.
To claim a game server, identify a game server group. You can also specify
a game server ID, although this approach bypasses GameLift FleetIQ
placement optimization. Optionally, include game data to pass to the game
server at the start of a game session, such as a game map or player
information.
When a game server is successfully claimed, connection information is
returned. A claimed game server's utilization status remains `AVAILABLE`
while the claim status is set to `CLAIMED` for up to 60 seconds. This time
period gives the game server time to update its status to `UTILIZED` (using
`UpdateGameServer`) once players join. If the game server's status is not
updated within 60 seconds, the game server reverts to unclaimed status and
is available to be claimed by another request. The claim time period is a
fixed value and is not configurable.
If you try to claim a specific game server, this request will fail in the
following cases:
<ul> <li> If the game server utilization status is `UTILIZED`.
</li> <li> If the game server claim status is `CLAIMED`.
</li> </ul> <note> When claiming a specific game server, this request will
succeed even if the game server is running on an instance in `DRAINING`
status. To avoid this, first check the instance status by calling
`DescribeGameServerInstances`.
</note> **Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**
<ul> <li> `RegisterGameServer`
</li> <li> `ListGameServers`
</li> <li> `ClaimGameServer`
</li> <li> `DescribeGameServer`
</li> <li> `UpdateGameServer`
</li> <li> `DeregisterGameServer`
</li> </ul>
"""
def claim_game_server(client, input, options \\ []) do
request(client, "ClaimGameServer", input, options)
end
@doc """
Creates an alias for a fleet. In most situations, you can use an alias ID
in place of a fleet ID. An alias provides a level of abstraction for a
fleet that is useful when redirecting player traffic from one fleet to
another, such as when updating your game build.
Amazon GameLift supports two types of routing strategies for aliases:
simple and terminal. A simple alias points to an active fleet. A terminal
alias is used to display messaging or link to a URL instead of routing
players to an active fleet. For example, you might use a terminal alias
when a game version is no longer supported and you want to direct players
to an upgrade site.
To create a fleet alias, specify an alias name, routing strategy, and
optional description. Each simple alias can point to only one fleet, but a
fleet can have multiple aliases. If successful, a new alias record is
returned, including an alias ID and an ARN. You can reassign an alias to
another fleet by calling `UpdateAlias`.
<ul> <li> `CreateAlias`
</li> <li> `ListAliases`
</li> <li> `DescribeAlias`
</li> <li> `UpdateAlias`
</li> <li> `DeleteAlias`
</li> <li> `ResolveAlias`
</li> </ul>
"""
def create_alias(client, input, options \\ []) do
request(client, "CreateAlias", input, options)
end
@doc """
Creates a new Amazon GameLift build resource for your game server binary
files. Game server binaries must be combined into a zip file for use with
Amazon GameLift.
<important> When setting up a new game build for GameLift, we recommend
using the AWS CLI command **
[upload-build](https://docs.aws.amazon.com/cli/latest/reference/gamelift/upload-build.html)
**. This helper command combines two tasks: (1) it uploads your build files
from a file directory to a GameLift Amazon S3 location, and (2) it creates
a new build resource.
</important> The `CreateBuild` operation can be used in the following
scenarios:
<ul> <li> To create a new game build with build files that are in an S3
location under an AWS account that you control. To use this option, you
must first give Amazon GameLift access to the S3 bucket. With permissions
in place, call `CreateBuild` and specify a build name, operating system,
and the S3 storage location of your game build.
</li> <li> To directly upload your build files to a GameLift S3 location.
To use this option, first call `CreateBuild` and specify a build name and
operating system. This operation creates a new build resource and also
returns an S3 location with temporary access credentials. Use the
credentials to manually upload your build files to the specified S3
location. For more information, see [Uploading
Objects](https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html)
in the *Amazon S3 Developer Guide*. Build files can be uploaded to the
GameLift S3 location once only; they cannot be updated afterward.
</li> </ul> If successful, this operation creates a new build resource with
a unique build ID and places it in `INITIALIZED` status. A build must be in
`READY` status before you can create fleets with it.
**Learn more**
[Uploading Your
Game](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html)
[ Create a Build with Files in Amazon
S3](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-cli-uploading.html#gamelift-build-cli-uploading-create-build)
**Related operations**
<ul> <li> `CreateBuild`
</li> <li> `ListBuilds`
</li> <li> `DescribeBuild`
</li> <li> `UpdateBuild`
</li> <li> `DeleteBuild`
</li> </ul>
"""
def create_build(client, input, options \\ []) do
request(client, "CreateBuild", input, options)
end
@doc """
Creates a new fleet to run your game servers, whether they are custom game
builds or Realtime Servers with a game-specific script. A fleet is a set of
Amazon Elastic Compute Cloud (Amazon EC2) instances, each of which can host
multiple game sessions. When creating a fleet, you choose the hardware
specifications, set some configuration options, and specify the game server
to deploy on the new fleet.
To create a new fleet, provide the following: (1) a fleet name, (2) an EC2
instance type and fleet type (spot or on-demand), (3) the build ID for your
game build or script ID if using Realtime Servers, and (4) a runtime
configuration, which determines how game servers will run on each instance
in the fleet.
If the `CreateFleet` call is successful, Amazon GameLift performs the
following tasks. You can track the progress of a fleet by checking the fleet
status or by monitoring fleet creation events:
<ul> <li> Creates a fleet resource. Status: `NEW`.
</li> <li> Begins writing events to the fleet event log, which can be
accessed in the Amazon GameLift console.
</li> <li> Sets the fleet's target capacity to 1 (desired instances), which
triggers Amazon GameLift to start one new EC2 instance.
</li> <li> Downloads the game build or Realtime script to the new instance
and installs it. Statuses: `DOWNLOADING`, `VALIDATING`, `BUILDING`.
</li> <li> Starts launching server processes on the instance. If the fleet
is configured to run multiple server processes per instance, Amazon
GameLift staggers each process launch by a few seconds. Status:
`ACTIVATING`.
</li> <li> Sets the fleet's status to `ACTIVE` as soon as one server
process is ready to host a game session.
</li> </ul> **Learn more**
[Setting Up
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
[Debug Fleet Creation
Issues](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-creating-debug.html#fleets-creating-debug-creation)
**Related operations**
<ul> <li> `CreateFleet`
</li> <li> `ListFleets`
</li> <li> `DeleteFleet`
</li> <li> `DescribeFleetAttributes`
</li> <li> `UpdateFleetAttributes`
</li> <li> `StartFleetActions` or `StopFleetActions`
</li> </ul>
"""
def create_fleet(client, input, options \\ []) do
request(client, "CreateFleet", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Creates a GameLift FleetIQ game server group for managing game hosting on a
collection of Amazon EC2 instances for game hosting. This operation creates
the game server group, creates an Auto Scaling group in your AWS account,
and establishes a link between the two groups. You can view the status of
your game server groups in the GameLift console. Game server group metrics
and events are emitted to Amazon CloudWatch.
Before creating a new game server group, you must have the following:
<ul> <li> An Amazon EC2 launch template that specifies how to launch Amazon
EC2 instances with your game server build. For more information, see [
Launching an Instance from a Launch
Template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html)
in the *Amazon EC2 User Guide*.
</li> <li> An IAM role that extends limited access to your AWS account to
allow GameLift FleetIQ to create and interact with the Auto Scaling group.
For more information, see [Create IAM roles for cross-service
interaction](https://docs.aws.amazon.com/gamelift/latest/developerguide/gsg-iam-permissions-roles.html)
in the *GameLift FleetIQ Developer Guide*.
</li> </ul> To create a new game server group, specify a unique group name,
IAM role and Amazon EC2 launch template, and provide a list of instance
types that can be used in the group. You must also set initial maximum and
minimum limits on the group's instance count. You can optionally set an
Auto Scaling policy with target tracking based on a GameLift FleetIQ
metric.
Once the game server group and corresponding Auto Scaling group are
created, you have full access to change the Auto Scaling group's
configuration as needed. Several properties that are set when creating a
game server group, including maximum/minimum size and auto-scaling policy
settings, must be updated directly in the Auto Scaling group. Keep in mind
that some Auto Scaling group properties are periodically updated by
GameLift FleetIQ as part of its balancing activities to optimize for
availability and cost.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**
<ul> <li> `CreateGameServerGroup`
</li> <li> `ListGameServerGroups`
</li> <li> `DescribeGameServerGroup`
</li> <li> `UpdateGameServerGroup`
</li> <li> `DeleteGameServerGroup`
</li> <li> `ResumeGameServerGroup`
</li> <li> `SuspendGameServerGroup`
</li> <li> `DescribeGameServerInstances`
</li> </ul>
"""
def create_game_server_group(client, input, options \\ []) do
request(client, "CreateGameServerGroup", input, options)
end
@doc """
Creates a multiplayer game session for players. This operation creates a
game session record and assigns an available server process in the
specified fleet to host the game session. A fleet must have an `ACTIVE`
status before a game session can be created in it.
To create a game session, specify either fleet ID or alias ID and indicate
a maximum number of players to allow in the game session. You can also
provide a name and game-specific properties for this game session. If
successful, a `GameSession` object is returned containing the game session
properties and other settings you specified.
**Idempotency tokens.** You can add a token that uniquely identifies game
session requests. This is useful for ensuring that game session requests
are idempotent. Multiple requests with the same idempotency token are
processed only once; subsequent requests return the original result. All
response values are the same with the exception of game session status,
which may change.
**Resource creation limits.** If you are creating a game session on a fleet
with a resource creation limit policy in force, then you must specify a
creator ID. Without this ID, Amazon GameLift has no way to evaluate the
policy for this new game session request.
**Player acceptance policy.** By default, newly created game sessions are
open to new players. You can restrict new player access by using
`UpdateGameSession` to change the game session's player session creation
policy.
**Game session logs.** Logs are retained for all active game sessions for
14 days. To access the logs, call `GetGameSessionLogUrl` to download the
log files.
*Available in Amazon GameLift Local.*
<ul> <li> `CreateGameSession`
</li> <li> `DescribeGameSessions`
</li> <li> `DescribeGameSessionDetails`
</li> <li> `SearchGameSessions`
</li> <li> `UpdateGameSession`
</li> <li> `GetGameSessionLogUrl`
</li> <li> Game session placements
<ul> <li> `StartGameSessionPlacement`
</li> <li> `DescribeGameSessionPlacement`
</li> <li> `StopGameSessionPlacement`
</li> </ul> </li> </ul>
"""
def create_game_session(client, input, options \\ []) do
request(client, "CreateGameSession", input, options)
end
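# Call sketch (illustrative IDs): creating a game session on a fleet with a
# player cap, following the CreateGameSession request syntax.
#
#   create_game_session(client, %{
#     "FleetId" => "fleet-1234",
#     "MaximumPlayerSessionCount" => 8,
#     "Name" => "my-session"
#   })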
@doc """
Establishes a new queue for processing requests to place new game sessions.
A queue identifies where new game sessions can be hosted -- by specifying a
list of destinations (fleets or aliases) -- and how long requests can wait
in the queue before timing out. You can set up a queue to try to place game
sessions on fleets in multiple Regions. To add placement requests to a
queue, call `StartGameSessionPlacement` and reference the queue name.
**Destination order.** When processing a request for a game session, Amazon
GameLift tries each destination in order until it finds one with available
resources to host the new game session. A queue's default order is
determined by how destinations are listed. The default order is overridden
when a game session placement request provides player latency information.
Player latency information enables Amazon GameLift to prioritize
destinations where players report the lowest average latency, as a result
placing the new game session where the majority of players will have the
best possible gameplay experience.
**Player latency policies.** For placement requests containing player
latency information, use player latency policies to protect individual
players from very high latencies. With a latency cap, even when a
destination can deliver a low latency for most players, the game is not
placed where any individual player is reporting latency higher than a
policy's maximum. A queue can have multiple latency policies, which are
enforced consecutively starting with the policy with the lowest latency
cap. Use multiple policies to gradually relax latency controls; for
example, you might set a policy with a low latency cap for the first 60
seconds, a second policy with a higher cap for the next 60 seconds, etc.
To create a new queue, provide a name, timeout value, a list of
destinations and, if desired, a set of latency policies. If successful, a
new queue object is returned.
**Learn more**
[ Design a Game Session
Queue](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-design.html)
[ Create a Game Session
Queue](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-creating.html)
**Related operations**
<ul> <li> `CreateGameSessionQueue`
</li> <li> `DescribeGameSessionQueues`
</li> <li> `UpdateGameSessionQueue`
</li> <li> `DeleteGameSessionQueue`
</li> </ul>
"""
def create_game_session_queue(client, input, options \\ []) do
request(client, "CreateGameSessionQueue", input, options)
end
@doc """
Defines a new matchmaking configuration for use with FlexMatch. A
matchmaking configuration sets out guidelines for matching players and
getting the matches into games. You can set up multiple matchmaking
configurations to handle the scenarios needed for your game. Each
matchmaking ticket (`StartMatchmaking` or `StartMatchBackfill`) specifies a
configuration for the match and provides player attributes to support the
configuration being used.
To create a matchmaking configuration, at a minimum you must specify the
following: configuration name; a rule set that governs how to evaluate
players and find acceptable matches; a game session queue to use when
placing a new game session for the match; and the maximum time allowed for
a matchmaking attempt.
To track the progress of matchmaking tickets, set up an Amazon Simple
Notification Service (SNS) to receive notifications, and provide the topic
ARN in the matchmaking configuration. An alternative method, continuously
polling ticket status with `DescribeMatchmaking`, should only be used for
games in development with low matchmaking usage.
**Learn more**
[ Design a FlexMatch
Matchmaker](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-configuration.html)
[ Set Up FlexMatch Event
Notification](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-notification.html)
**Related operations**
<ul> <li> `CreateMatchmakingConfiguration`
</li> <li> `DescribeMatchmakingConfigurations`
</li> <li> `UpdateMatchmakingConfiguration`
</li> <li> `DeleteMatchmakingConfiguration`
</li> <li> `CreateMatchmakingRuleSet`
</li> <li> `DescribeMatchmakingRuleSets`
</li> <li> `ValidateMatchmakingRuleSet`
</li> <li> `DeleteMatchmakingRuleSet`
</li> </ul>
"""
def create_matchmaking_configuration(client, input, options \\ []) do
request(client, "CreateMatchmakingConfiguration", input, options)
end
@doc """
Creates a new rule set for FlexMatch matchmaking. A rule set describes the
type of match to create, such as the number and size of teams. It also sets
the parameters for acceptable player matches, such as minimum skill level
or character type. A rule set is used by a `MatchmakingConfiguration`.
To create a matchmaking rule set, provide unique rule set name and the rule
set body in JSON format. Rule sets must be defined in the same Region as
the matchmaking configuration they are used with.
Since matchmaking rule sets cannot be edited, it is a good idea to check
the rule set syntax using `ValidateMatchmakingRuleSet` before creating a
new rule set.
**Learn more**
<ul> <li> [Build a Rule
Set](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-rulesets.html)
</li> <li> [Design a
Matchmaker](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-configuration.html)
</li> <li> [Matchmaking with
FlexMatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-intro.html)
</li> </ul> **Related operations**
<ul> <li> `CreateMatchmakingConfiguration`
</li> <li> `DescribeMatchmakingConfigurations`
</li> <li> `UpdateMatchmakingConfiguration`
</li> <li> `DeleteMatchmakingConfiguration`
</li> <li> `CreateMatchmakingRuleSet`
</li> <li> `DescribeMatchmakingRuleSets`
</li> <li> `ValidateMatchmakingRuleSet`
</li> <li> `DeleteMatchmakingRuleSet`
</li> </ul>
"""
def create_matchmaking_rule_set(client, input, options \\ []) do
request(client, "CreateMatchmakingRuleSet", input, options)
end
@doc """
Reserves an open player slot in an active game session. Before a player can
be added, a game session must have an `ACTIVE` status, have a creation
policy of `ALLOW_ALL`, and have an open player slot. To add a group of
players to a game session, use `CreatePlayerSessions`. When the player
connects to the game server and references a player session ID, the game
server contacts the Amazon GameLift service to validate the player
reservation and accept the player.
To create a player session, specify a game session ID, player ID, and
optionally a string of player data. If successful, a slot is reserved in
the game session for the player and a new `PlayerSession` object is
returned. Player sessions cannot be updated.
*Available in Amazon GameLift Local.*
<ul> <li> `CreatePlayerSession`
</li> <li> `CreatePlayerSessions`
</li> <li> `DescribePlayerSessions`
</li> <li> Game session placements
<ul> <li> `StartGameSessionPlacement`
</li> <li> `DescribeGameSessionPlacement`
</li> <li> `StopGameSessionPlacement`
</li> </ul> </li> </ul>
"""
def create_player_session(client, input, options \\ []) do
request(client, "CreatePlayerSession", input, options)
end
@doc """
Reserves open slots in a game session for a group of players. Before
players can be added, a game session must have an `ACTIVE` status, have a
creation policy of `ALLOW_ALL`, and have an open player slot. To add a
single player to a game session, use `CreatePlayerSession`. When a player
connects to the game server and references a player session ID, the game
server contacts the Amazon GameLift service to validate the player
reservation and accept the player.
To create player sessions, specify a game session ID, a list of player IDs,
and optionally a set of player data strings. If successful, a slot is
reserved in the game session for each player and a set of new
`PlayerSession` objects is returned. Player sessions cannot be updated.
*Available in Amazon GameLift Local.*
<ul> <li> `CreatePlayerSession`
</li> <li> `CreatePlayerSessions`
</li> <li> `DescribePlayerSessions`
</li> <li> Game session placements
<ul> <li> `StartGameSessionPlacement`
</li> <li> `DescribeGameSessionPlacement`
</li> <li> `StopGameSessionPlacement`
</li> </ul> </li> </ul>
"""
def create_player_sessions(client, input, options \\ []) do
request(client, "CreatePlayerSessions", input, options)
end
@doc """
Creates a new script record for your Realtime Servers script. Realtime
scripts are JavaScript that provide configuration settings and optional
custom game logic for your game. The script is deployed when you create a
Realtime Servers fleet to host your game sessions. Script logic is executed
during an active game session.
To create a new script record, specify a script name and provide the script
file(s). The script files and all dependencies must be zipped into a single
file. You can pull the zip file from either of these locations:
<ul> <li> A locally available directory. Use the *ZipFile* parameter for
this option.
</li> <li> An Amazon Simple Storage Service (Amazon S3) bucket under your
AWS account. Use the *StorageLocation* parameter for this option. You'll
need to have an Identity and Access Management (IAM) role that allows the
Amazon GameLift service to access your S3 bucket.
</li> </ul> If the call is successful, a new script record is created with
a unique script ID. If the script file is provided as a local file, the
file is uploaded to an Amazon GameLift-owned S3 bucket and the script
record's storage location reflects this location. If the script file is
provided as an S3 bucket, Amazon GameLift accesses the file at this storage
location as needed for deployment.
**Learn more**
[Amazon GameLift Realtime
Servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/realtime-intro.html)
[Set Up a Role for Amazon GameLift
Access](https://docs.aws.amazon.com/gamelift/latest/developerguide/setting-up-role.html)
**Related operations**
<ul> <li> `CreateScript`
</li> <li> `ListScripts`
</li> <li> `DescribeScript`
</li> <li> `UpdateScript`
</li> <li> `DeleteScript`
</li> </ul>
"""
def create_script(client, input, options \\ []) do
request(client, "CreateScript", input, options)
end
@doc """
Requests authorization to create or delete a peer connection between the
VPC for your Amazon GameLift fleet and a virtual private cloud (VPC) in
your AWS account. VPC peering enables the game servers on your fleet to
communicate directly with other AWS resources. Once you've received
authorization, call `CreateVpcPeeringConnection` to establish the peering
connection. For more information, see [VPC Peering with Amazon GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/vpc-peering.html).
You can peer with VPCs that are owned by any AWS account you have access
to, including the account that you use to manage your Amazon GameLift
fleets. You cannot peer with VPCs that are in different Regions.
To request authorization to create a connection, call this operation from
the AWS account with the VPC that you want to peer to your Amazon GameLift
fleet. For example, to enable your game servers to retrieve data from a
DynamoDB table, use the account that manages that DynamoDB resource.
Identify the following values: (1) The ID of the VPC that you want to peer
with, and (2) the ID of the AWS account that you use to manage Amazon
GameLift. If successful, VPC peering is authorized for the specified VPC.
To request authorization to delete a connection, call this operation from
the AWS account with the VPC that is peered with your Amazon GameLift
fleet. Identify the following values: (1) VPC ID that you want to delete
the peering connection for, and (2) ID of the AWS account that you use to
manage Amazon GameLift.
The authorization remains valid for 24 hours unless it is canceled by a
call to `DeleteVpcPeeringAuthorization`. You must create or delete the
peering connection while the authorization is valid.
<ul> <li> `CreateVpcPeeringAuthorization`
</li> <li> `DescribeVpcPeeringAuthorizations`
</li> <li> `DeleteVpcPeeringAuthorization`
</li> <li> `CreateVpcPeeringConnection`
</li> <li> `DescribeVpcPeeringConnections`
</li> <li> `DeleteVpcPeeringConnection`
</li> </ul>
"""
def create_vpc_peering_authorization(client, input, options \\ []) do
request(client, "CreateVpcPeeringAuthorization", input, options)
end
@doc """
Establishes a VPC peering connection between a virtual private cloud (VPC)
in an AWS account with the VPC for your Amazon GameLift fleet. VPC peering
enables the game servers on your fleet to communicate directly with other
AWS resources. You can peer with VPCs in any AWS account that you have
access to, including the account that you use to manage your Amazon
GameLift fleets. You cannot peer with VPCs that are in different Regions.
For more information, see [VPC Peering with Amazon GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/vpc-peering.html).
Before calling this operation to establish the peering connection, you
first need to call `CreateVpcPeeringAuthorization` and identify the VPC you
want to peer with. Once the authorization for the specified VPC is issued,
you have 24 hours to establish the connection. These two operations handle
all tasks necessary to peer the two VPCs, including acceptance, updating
routing tables, etc.
To establish the connection, call this operation from the AWS account that
is used to manage the Amazon GameLift fleets. Identify the following
values: (1) The ID of the fleet you want to enable a VPC peering
connection for; (2) The AWS account with the VPC that you want to peer
with; and (3) The ID of the VPC you want to peer with. This operation is
asynchronous. If successful, a `VpcPeeringConnection` request is created.
You can use continuous polling to track the request's status using
`DescribeVpcPeeringConnections`, or by monitoring fleet events for success
or failure using `DescribeFleetEvents`.
<ul> <li> `CreateVpcPeeringAuthorization`
</li> <li> `DescribeVpcPeeringAuthorizations`
</li> <li> `DeleteVpcPeeringAuthorization`
</li> <li> `CreateVpcPeeringConnection`
</li> <li> `DescribeVpcPeeringConnections`
</li> <li> `DeleteVpcPeeringConnection`
</li> </ul>
"""
def create_vpc_peering_connection(client, input, options \\ []) do
request(client, "CreateVpcPeeringConnection", input, options)
end
@doc """
Deletes an alias. This operation removes all record of the alias. Game
clients attempting to access a server process using the deleted alias
receive an error. To delete an alias, specify the alias ID to be deleted.
<ul> <li> `CreateAlias`
</li> <li> `ListAliases`
</li> <li> `DescribeAlias`
</li> <li> `UpdateAlias`
</li> <li> `DeleteAlias`
</li> <li> `ResolveAlias`
</li> </ul>
"""
def delete_alias(client, input, options \\ []) do
request(client, "DeleteAlias", input, options)
end
@doc """
Deletes a build. This operation permanently deletes the build resource and
any uploaded build files. Deleting a build does not affect the status of
any active fleets using the build, but you can no longer create new fleets
with the deleted build.
To delete a build, specify the build ID.
**Learn more**
[ Upload a Custom Server
Build](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html)
**Related operations**
<ul> <li> `CreateBuild`
</li> <li> `ListBuilds`
</li> <li> `DescribeBuild`
</li> <li> `UpdateBuild`
</li> <li> `DeleteBuild`
</li> </ul>
"""
def delete_build(client, input, options \\ []) do
request(client, "DeleteBuild", input, options)
end
@doc """
Deletes everything related to a fleet. Before deleting a fleet, you must
set the fleet's desired capacity to zero. See `UpdateFleetCapacity`.
If the fleet being deleted has a VPC peering connection, you first need to
get a valid authorization (good for 24 hours) by calling
`CreateVpcPeeringAuthorization`. You do not need to explicitly delete the
VPC peering connection--this is done as part of the delete fleet process.
This operation removes the fleet and its resources. Once a fleet is
deleted, you can no longer use any of the resources in that fleet.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**
<ul> <li> `CreateFleet`
</li> <li> `ListFleets`
</li> <li> `DeleteFleet`
</li> <li> `DescribeFleetAttributes`
</li> <li> `UpdateFleetAttributes`
</li> <li> `StartFleetActions` or `StopFleetActions`
</li> </ul>
"""
def delete_fleet(client, input, options \\ []) do
request(client, "DeleteFleet", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Terminates a game server group and permanently deletes the game server
group record. You have several options for how these resources are impacted
when deleting the game server group. Depending on the type of delete
operation selected, this operation might affect these resources:
<ul> <li> The game server group
</li> <li> The corresponding Auto Scaling group
</li> <li> All game servers that are currently running in the group
</li> </ul> To delete a game server group, identify the game server group
to delete and specify the type of delete operation to initiate. Game server
groups can only be deleted if they are in `ACTIVE` or `ERROR` status.
If the delete request is successful, a series of operations are kicked off.
The game server group status is changed to `DELETE_SCHEDULED`, which
prevents new game servers from being registered and stops automatic scaling
activity. Once all game servers in the game server group are deregistered,
GameLift FleetIQ can begin deleting resources. If any of the delete
operations fail, the game server group is placed in `ERROR` status.
GameLift FleetIQ emits delete events to Amazon CloudWatch.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**
<ul> <li> `CreateGameServerGroup`
</li> <li> `ListGameServerGroups`
</li> <li> `DescribeGameServerGroup`
</li> <li> `UpdateGameServerGroup`
</li> <li> `DeleteGameServerGroup`
</li> <li> `ResumeGameServerGroup`
</li> <li> `SuspendGameServerGroup`
</li> <li> `DescribeGameServerInstances`
</li> </ul>
"""
def delete_game_server_group(client, input, options \\ []) do
request(client, "DeleteGameServerGroup", input, options)
end
@doc """
Deletes a game session queue. Once a queue is successfully deleted,
unfulfilled `StartGameSessionPlacement` requests that reference the queue
will fail. To delete a queue, specify the queue name.
**Learn more**
[ Using Multi-Region
Queues](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-intro.html)
**Related operations**
<ul> <li> `CreateGameSessionQueue`
</li> <li> `DescribeGameSessionQueues`
</li> <li> `UpdateGameSessionQueue`
</li> <li> `DeleteGameSessionQueue`
</li> </ul>
"""
def delete_game_session_queue(client, input, options \\ []) do
request(client, "DeleteGameSessionQueue", input, options)
end
@doc """
Permanently removes a FlexMatch matchmaking configuration. To delete,
specify the configuration name. A matchmaking configuration cannot be
deleted if it is being used in any active matchmaking tickets.
**Related operations**
<ul> <li> `CreateMatchmakingConfiguration`
</li> <li> `DescribeMatchmakingConfigurations`
</li> <li> `UpdateMatchmakingConfiguration`
</li> <li> `DeleteMatchmakingConfiguration`
</li> <li> `CreateMatchmakingRuleSet`
</li> <li> `DescribeMatchmakingRuleSets`
</li> <li> `ValidateMatchmakingRuleSet`
</li> <li> `DeleteMatchmakingRuleSet`
</li> </ul>
"""
def delete_matchmaking_configuration(client, input, options \\ []) do
request(client, "DeleteMatchmakingConfiguration", input, options)
end
@doc """
Deletes an existing matchmaking rule set. To delete the rule set, provide
the rule set name. Rule sets cannot be deleted if they are currently being
used by a matchmaking configuration.
**Learn more**
<ul> <li> [Build a Rule
Set](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-rulesets.html)
</li> </ul> **Related operations**
<ul> <li> `CreateMatchmakingConfiguration`
</li> <li> `DescribeMatchmakingConfigurations`
</li> <li> `UpdateMatchmakingConfiguration`
</li> <li> `DeleteMatchmakingConfiguration`
</li> <li> `CreateMatchmakingRuleSet`
</li> <li> `DescribeMatchmakingRuleSets`
</li> <li> `ValidateMatchmakingRuleSet`
</li> <li> `DeleteMatchmakingRuleSet`
</li> </ul>
"""
def delete_matchmaking_rule_set(client, input, options \\ []) do
request(client, "DeleteMatchmakingRuleSet", input, options)
end
@doc """
Deletes a fleet scaling policy. Once deleted, the policy is no longer in
force and GameLift removes all record of it. To delete a scaling policy,
specify both the scaling policy name and the fleet ID it is associated
with.
To temporarily suspend scaling policies, call `StopFleetActions`. This
operation suspends all policies for the fleet.
<ul> <li> `DescribeFleetCapacity`
</li> <li> `UpdateFleetCapacity`
</li> <li> `DescribeEC2InstanceLimits`
</li> <li> Manage scaling policies:
<ul> <li> `PutScalingPolicy` (auto-scaling)
</li> <li> `DescribeScalingPolicies` (auto-scaling)
</li> <li> `DeleteScalingPolicy` (auto-scaling)
</li> </ul> </li> <li> Manage fleet actions:
<ul> <li> `StartFleetActions`
</li> <li> `StopFleetActions`
</li> </ul> </li> </ul>
"""
def delete_scaling_policy(client, input, options \\ []) do
request(client, "DeleteScalingPolicy", input, options)
end
@doc """
Deletes a Realtime script. This operation permanently deletes the script
record. If script files were uploaded, they are also deleted (files stored
in an S3 bucket are not deleted).
To delete a script, specify the script ID. Before deleting a script, be
sure to terminate all fleets that are deployed with the script being
deleted. Fleet instances periodically check for script updates, and if the
script record no longer exists, the instance will go into an error state
and be unable to host game sessions.
**Learn more**
[Amazon GameLift Realtime
Servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/realtime-intro.html)
**Related operations**

- `CreateScript`
- `ListScripts`
- `DescribeScript`
- `UpdateScript`
- `DeleteScript`
"""
def delete_script(client, input, options \\ []) do
request(client, "DeleteScript", input, options)
end
@doc """
Cancels a pending VPC peering authorization for the specified VPC. If you
need to delete an existing VPC peering connection, call
`DeleteVpcPeeringConnection`.

- `CreateVpcPeeringAuthorization`
- `DescribeVpcPeeringAuthorizations`
- `DeleteVpcPeeringAuthorization`
- `CreateVpcPeeringConnection`
- `DescribeVpcPeeringConnections`
- `DeleteVpcPeeringConnection`
"""
def delete_vpc_peering_authorization(client, input, options \\ []) do
request(client, "DeleteVpcPeeringAuthorization", input, options)
end
@doc """
Removes a VPC peering connection. To delete the connection, you must have a
valid authorization for the VPC peering connection that you want to delete.
You can check for an authorization by calling
`DescribeVpcPeeringAuthorizations` or request a new one using
`CreateVpcPeeringAuthorization`.
Once a valid authorization exists, call this operation from the AWS account
that is used to manage the Amazon GameLift fleets. Identify the connection
to delete by the connection ID and fleet ID. If successful, the connection
is removed.

- `CreateVpcPeeringAuthorization`
- `DescribeVpcPeeringAuthorizations`
- `DeleteVpcPeeringAuthorization`
- `CreateVpcPeeringConnection`
- `DescribeVpcPeeringConnections`
- `DeleteVpcPeeringConnection`
"""
def delete_vpc_peering_connection(client, input, options \\ []) do
request(client, "DeleteVpcPeeringConnection", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Removes the game server from a game server group. As a result of this
operation, the deregistered game server can no longer be claimed and will
not be returned in a list of active game servers.
To deregister a game server, specify the game server group and game server
ID. If successful, this operation emits a CloudWatch event with termination
timestamp and reason.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `RegisterGameServer`
- `ListGameServers`
- `ClaimGameServer`
- `DescribeGameServer`
- `UpdateGameServer`
- `DeregisterGameServer`
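
**Example**

A hedged sketch (the `AWS.GameLift` module name, client setup, and sample
IDs are assumptions):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    # Identify the game server by its group and its unique ID.
    input = %{"GameServerGroupName" => "my-gsg", "GameServerId" => "gs-1234"}
    AWS.GameLift.deregister_game_server(client, input)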
"""
def deregister_game_server(client, input, options \\ []) do
request(client, "DeregisterGameServer", input, options)
end
@doc """
Retrieves properties for an alias. This operation returns all alias
metadata and settings. To get an alias's target fleet ID only, use
`ResolveAlias`.
To get alias properties, specify the alias ID. If successful, the requested
alias record is returned.

- `CreateAlias`
- `ListAliases`
- `DescribeAlias`
- `UpdateAlias`
- `DeleteAlias`
- `ResolveAlias`
"""
def describe_alias(client, input, options \\ []) do
request(client, "DescribeAlias", input, options)
end
@doc """
Retrieves properties for a custom game build. To request a build resource,
specify a build ID. If successful, an object containing the build
properties is returned.
**Learn more**
[ Upload a Custom Server
Build](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html)
**Related operations**

- `CreateBuild`
- `ListBuilds`
- `DescribeBuild`
- `UpdateBuild`
- `DeleteBuild`
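
**Example**

For illustration, a minimal request sketch (module name and client setup
assumed, as neither is shown in this excerpt):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    # Request the build record by its ID.
    AWS.GameLift.describe_build(client, %{"BuildId" => "build-1111aaaa-22bb-33cc-44dd-5555eeee66ff"})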
"""
def describe_build(client, input, options \\ []) do
request(client, "DescribeBuild", input, options)
end
@doc """
Retrieves the following information for the specified EC2 instance type:

- Maximum number of instances allowed per AWS account (service limit).
- Current usage for the AWS account.

To learn more about the capabilities of each instance type, see
[Amazon EC2 Instance Types](http://aws.amazon.com/ec2/instance-types/).
Note that the instance types offered may vary depending on the region.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- `DescribeFleetAttributes`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
"""
def describe_e_c2_instance_limits(client, input, options \\ []) do
request(client, "DescribeEC2InstanceLimits", input, options)
end
@doc """
Retrieves core properties, including configuration, status, and metadata,
for a fleet.
To get attributes for one or more fleets, provide a list of fleet IDs or
fleet ARNs. To get attributes for all fleets, do not specify a fleet
identifier. When requesting attributes for multiple fleets, use the
pagination parameters to retrieve results as a set of sequential pages. If
successful, a `FleetAttributes` object is returned for each fleet
requested, unless the fleet identifier is not found.

> Note: Some API operations may limit the number of fleet IDs allowed in one
> request. If a request exceeds this limit, the request fails and the error
> message includes the maximum allowed number.

**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- Describe fleets:
  - `DescribeFleetAttributes`
  - `DescribeFleetCapacity`
  - `DescribeFleetPortSettings`
  - `DescribeFleetUtilization`
  - `DescribeRuntimeConfiguration`
  - `DescribeEC2InstanceLimits`
  - `DescribeFleetEvents`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
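
**Example**

A paging sketch (module name, client setup, and the `{:ok, output, http}`
return shape are assumptions based on aws-elixir conventions):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    # Fetch attributes for specific fleets, ten records per page.
    input = %{"FleetIds" => ["fleet-1111aaaa-22bb-33cc-44dd-5555eeee66ff"], "Limit" => 10}
    {:ok, output, _http} = AWS.GameLift.describe_fleet_attributes(client, input)
    # When present, output["NextToken"] feeds the "NextToken" key of the next request.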
"""
def describe_fleet_attributes(client, input, options \\ []) do
request(client, "DescribeFleetAttributes", input, options)
end
@doc """
Retrieves the current capacity statistics for one or more fleets. These
statistics present a snapshot of the fleet's instances and provide insight
on current or imminent scaling activity. To get statistics on game hosting
activity in the fleet, see `DescribeFleetUtilization`.
You can request capacity for all fleets or specify a list of one or more
fleet identifiers. When requesting multiple fleets, use the pagination
parameters to retrieve results as a set of sequential pages. If successful,
a `FleetCapacity` object is returned for each requested fleet ID. When a
list of fleet IDs is provided, attribute objects are returned only for
fleets that currently exist.

> Note: Some API operations may limit the number of fleet IDs allowed in one
> request. If a request exceeds this limit, the request fails and the error
> message includes the maximum allowed.

**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
[GameLift Metrics for
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html#gamelift-metrics-fleet)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- Describe fleets:
  - `DescribeFleetAttributes`
  - `DescribeFleetCapacity`
  - `DescribeFleetPortSettings`
  - `DescribeFleetUtilization`
  - `DescribeRuntimeConfiguration`
  - `DescribeEC2InstanceLimits`
  - `DescribeFleetEvents`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
"""
def describe_fleet_capacity(client, input, options \\ []) do
request(client, "DescribeFleetCapacity", input, options)
end
@doc """
Retrieves entries from the specified fleet's event log. You can specify a
time range to limit the result set. Use the pagination parameters to
retrieve results as a set of sequential pages. If successful, a collection
of event log entries matching the request are returned.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- Describe fleets:
  - `DescribeFleetAttributes`
  - `DescribeFleetCapacity`
  - `DescribeFleetPortSettings`
  - `DescribeFleetUtilization`
  - `DescribeRuntimeConfiguration`
  - `DescribeEC2InstanceLimits`
  - `DescribeFleetEvents`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
"""
def describe_fleet_events(client, input, options \\ []) do
request(client, "DescribeFleetEvents", input, options)
end
@doc """
Retrieves a fleet's inbound connection permissions. Connection permissions
specify the range of IP addresses and port settings that incoming traffic
can use to access server processes in the fleet. Game sessions that are
running on instances in the fleet use connections that fall in this range.
To get a fleet's inbound connection permissions, specify the fleet's unique
identifier. If successful, a collection of `IpPermission` objects is
returned for the requested fleet ID. If the requested fleet has been
deleted, the result set is empty.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- Describe fleets:
  - `DescribeFleetAttributes`
  - `DescribeFleetCapacity`
  - `DescribeFleetPortSettings`
  - `DescribeFleetUtilization`
  - `DescribeRuntimeConfiguration`
  - `DescribeEC2InstanceLimits`
  - `DescribeFleetEvents`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
"""
def describe_fleet_port_settings(client, input, options \\ []) do
request(client, "DescribeFleetPortSettings", input, options)
end
@doc """
Retrieves utilization statistics for one or more fleets. These statistics
provide insight into how available hosting resources are currently being
used. To get statistics on available hosting resources, see
`DescribeFleetCapacity`.
You can request utilization data for all fleets, or specify a list of one
or more fleet IDs. When requesting multiple fleets, use the pagination
parameters to retrieve results as a set of sequential pages. If successful,
a `FleetUtilization` object is returned for each requested fleet ID, unless
the fleet identifier is not found.

> Note: Some API operations may limit the number of fleet IDs allowed in one
> request. If a request exceeds this limit, the request fails and the error
> message includes the maximum allowed.

**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
[GameLift Metrics for
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html#gamelift-metrics-fleet)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- Describe fleets:
  - `DescribeFleetAttributes`
  - `DescribeFleetCapacity`
  - `DescribeFleetPortSettings`
  - `DescribeFleetUtilization`
  - `DescribeRuntimeConfiguration`
  - `DescribeEC2InstanceLimits`
  - `DescribeFleetEvents`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
"""
def describe_fleet_utilization(client, input, options \\ []) do
request(client, "DescribeFleetUtilization", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Retrieves information for a registered game server. Information includes
game server status, health check info, and the instance that the game
server is running on.
To retrieve game server information, specify the game server ID. If
successful, the requested game server object is returned.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `RegisterGameServer`
- `ListGameServers`
- `ClaimGameServer`
- `DescribeGameServer`
- `UpdateGameServer`
- `DeregisterGameServer`
"""
def describe_game_server(client, input, options \\ []) do
request(client, "DescribeGameServer", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Retrieves information on a game server group. This operation returns only
properties related to GameLift FleetIQ. To view or update properties for
the corresponding Auto Scaling group, such as launch template, auto scaling
policies, and maximum/minimum group size, access the Auto Scaling group
directly.
To get attributes for a game server group, provide a group name or ARN
value. If successful, a `GameServerGroup` object is returned.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `CreateGameServerGroup`
- `ListGameServerGroups`
- `DescribeGameServerGroup`
- `UpdateGameServerGroup`
- `DeleteGameServerGroup`
- `ResumeGameServerGroup`
- `SuspendGameServerGroup`
- `DescribeGameServerInstances`
"""
def describe_game_server_group(client, input, options \\ []) do
request(client, "DescribeGameServerGroup", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Retrieves status information about the Amazon EC2 instances associated with
a GameLift FleetIQ game server group. Use this operation to detect when
instances are active or not available to host new game servers. If you are
looking for instance configuration information, call
`DescribeGameServerGroup` or access the corresponding Auto Scaling group
properties.
To request status for all instances in the game server group, provide a
game server group ID only. To request status for specific instances,
provide the game server group ID and one or more instance IDs. Use the
pagination parameters to retrieve results in sequential segments. If
successful, a collection of `GameServerInstance` objects is returned.
This operation is not designed to be called with every game server claim
request; this practice can cause you to exceed your API limit, which
results in errors. Instead, as a best practice, cache the results and
refresh your cache no more than once every 10 seconds.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `CreateGameServerGroup`
- `ListGameServerGroups`
- `DescribeGameServerGroup`
- `UpdateGameServerGroup`
- `DeleteGameServerGroup`
- `ResumeGameServerGroup`
- `SuspendGameServerGroup`
- `DescribeGameServerInstances`
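
**Example**

A minimal sketch, with the caching guidance above in mind (module name,
client setup, and sample IDs are assumptions):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    # Status for one specific instance; omit "InstanceIds" to cover the whole group.
    input = %{"GameServerGroupName" => "my-gsg", "InstanceIds" => ["i-0abc123def456789a"]}
    AWS.GameLift.describe_game_server_instances(client, input)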
"""
def describe_game_server_instances(client, input, options \\ []) do
request(client, "DescribeGameServerInstances", input, options)
end
@doc """
Retrieves properties, including the protection policy in force, for one or
more game sessions. This operation can be used in several ways: (1) provide
a `GameSessionId` or `GameSessionArn` to request details for a specific
game session; (2) provide either a `FleetId` or an `AliasId` to request
properties for all game sessions running on a fleet.
To get game session record(s), specify just one of the following: game
session ID, fleet ID, or alias ID. You can filter this request by game
session status. Use the pagination parameters to retrieve results as a set
of sequential pages. If successful, a `GameSessionDetail` object is
returned for each session matching the request.

- `CreateGameSession`
- `DescribeGameSessions`
- `DescribeGameSessionDetails`
- `SearchGameSessions`
- `UpdateGameSession`
- `GetGameSessionLogUrl`
- Game session placements:
  - `StartGameSessionPlacement`
  - `DescribeGameSessionPlacement`
  - `StopGameSessionPlacement`
"""
def describe_game_session_details(client, input, options \\ []) do
request(client, "DescribeGameSessionDetails", input, options)
end
@doc """
Retrieves properties and current status of a game session placement
request. To get game session placement details, specify the placement ID.
If successful, a `GameSessionPlacement` object is returned.

- `CreateGameSession`
- `DescribeGameSessions`
- `DescribeGameSessionDetails`
- `SearchGameSessions`
- `UpdateGameSession`
- `GetGameSessionLogUrl`
- Game session placements:
  - `StartGameSessionPlacement`
  - `DescribeGameSessionPlacement`
  - `StopGameSessionPlacement`
"""
def describe_game_session_placement(client, input, options \\ []) do
request(client, "DescribeGameSessionPlacement", input, options)
end
@doc """
Retrieves the properties for one or more game session queues. When
requesting multiple queues, use the pagination parameters to retrieve
results as a set of sequential pages. If successful, a `GameSessionQueue`
object is returned for each requested queue. When specifying a list of
queues, objects are returned only for queues that currently exist in the
Region.
**Learn more**
[ View Your
Queues](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-console.html)
**Related operations**

- `CreateGameSessionQueue`
- `DescribeGameSessionQueues`
- `UpdateGameSessionQueue`
- `DeleteGameSessionQueue`
"""
def describe_game_session_queues(client, input, options \\ []) do
request(client, "DescribeGameSessionQueues", input, options)
end
@doc """
Retrieves a set of one or more game sessions. Request a specific game
session or request all game sessions on a fleet. Alternatively, use
`SearchGameSessions` to request a set of active game sessions that are
filtered by certain criteria. To retrieve protection policy settings for
game sessions, use `DescribeGameSessionDetails`.
To get game sessions, specify one of the following: game session ID, fleet
ID, or alias ID. You can filter this request by game session status. Use
the pagination parameters to retrieve results as a set of sequential pages.
If successful, a `GameSession` object is returned for each game session
matching the request.
*Available in Amazon GameLift Local.*

- `CreateGameSession`
- `DescribeGameSessions`
- `DescribeGameSessionDetails`
- `SearchGameSessions`
- `UpdateGameSession`
- `GetGameSessionLogUrl`
- Game session placements:
  - `StartGameSessionPlacement`
  - `DescribeGameSessionPlacement`
  - `StopGameSessionPlacement`
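
**Example**

A minimal sketch (module name, client setup, and the sample fleet ID are
assumptions):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    # All active game sessions on one fleet.
    input = %{"FleetId" => "fleet-1111aaaa-22bb-33cc-44dd-5555eeee66ff", "StatusFilter" => "ACTIVE"}
    AWS.GameLift.describe_game_sessions(client, input)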
"""
def describe_game_sessions(client, input, options \\ []) do
request(client, "DescribeGameSessions", input, options)
end
@doc """
Retrieves information about a fleet's instances, including instance IDs.
Use this operation to get details on all instances in the fleet or get
details on one specific instance.
To get a specific instance, specify fleet ID and instance ID. To get all
instances in a fleet, specify a fleet ID only. Use the pagination
parameters to retrieve results as a set of sequential pages. If successful,
an `Instance` object is returned for each result.
**Learn more**
[Remotely Access Fleet
Instances](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-remote-access.html)
[Debug Fleet
Issues](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-creating-debug.html)
**Related operations**

- `DescribeInstances`
- `GetInstanceAccess`
"""
def describe_instances(client, input, options \\ []) do
request(client, "DescribeInstances", input, options)
end
@doc """
Retrieves one or more matchmaking tickets. Use this operation to retrieve
ticket information, including (after a successful match is made) the
connection information for the resulting new game session.
To request matchmaking tickets, provide a list of up to 10 ticket IDs. If
the request is successful, a ticket object is returned for each requested
ID that currently exists.
This operation is not designed to be called continually to track matchmaking
ticket status. This practice can cause you to exceed your API limit, which
results in errors. Instead, as a best practice, set up an Amazon Simple
Notification Service (SNS) topic to receive notifications, and provide the
topic ARN in the matchmaking configuration. Continuously polling ticket
status with `DescribeMatchmaking` should only be used for games in
development with low matchmaking usage.
**Learn more**
[ Add FlexMatch to a Game
Client](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-client.html)
[ Set Up FlexMatch Event
Notification](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-notification.html)
**Related operations**

- `StartMatchmaking`
- `DescribeMatchmaking`
- `StopMatchmaking`
- `AcceptMatch`
- `StartMatchBackfill`
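
**Example**

A minimal polling sketch, for development use only per the note above
(module name and client setup are assumptions):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    # Up to 10 ticket IDs per request.
    AWS.GameLift.describe_matchmaking(client, %{"TicketIds" => ["ticket-1111", "ticket-2222"]})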
"""
def describe_matchmaking(client, input, options \\ []) do
request(client, "DescribeMatchmaking", input, options)
end
@doc """
Retrieves the details of FlexMatch matchmaking configurations.
This operation offers the following options: (1) retrieve all matchmaking
configurations, (2) retrieve configurations for a specified list, or (3)
retrieve all configurations that use a specified rule set name. When
requesting multiple items, use the pagination parameters to retrieve
results as a set of sequential pages.
If successful, a configuration is returned for each requested name. When
specifying a list of names, only configurations that currently exist are
returned.
**Learn more**
[ Setting Up FlexMatch
Matchmakers](https://docs.aws.amazon.com/gamelift/latest/developerguide/matchmaker-build.html)
**Related operations**

- `CreateMatchmakingConfiguration`
- `DescribeMatchmakingConfigurations`
- `UpdateMatchmakingConfiguration`
- `DeleteMatchmakingConfiguration`
- `CreateMatchmakingRuleSet`
- `DescribeMatchmakingRuleSets`
- `ValidateMatchmakingRuleSet`
- `DeleteMatchmakingRuleSet`
"""
def describe_matchmaking_configurations(client, input, options \\ []) do
request(client, "DescribeMatchmakingConfigurations", input, options)
end
@doc """
Retrieves the details for FlexMatch matchmaking rule sets. You can request
all existing rule sets for the Region, or provide a list of one or more
rule set names. When requesting multiple items, use the pagination
parameters to retrieve results as a set of sequential pages. If successful,
a rule set is returned for each requested name.
**Learn more**

- [Build a Rule Set](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-rulesets.html)

**Related operations**

- `CreateMatchmakingConfiguration`
- `DescribeMatchmakingConfigurations`
- `UpdateMatchmakingConfiguration`
- `DeleteMatchmakingConfiguration`
- `CreateMatchmakingRuleSet`
- `DescribeMatchmakingRuleSets`
- `ValidateMatchmakingRuleSet`
- `DeleteMatchmakingRuleSet`
"""
def describe_matchmaking_rule_sets(client, input, options \\ []) do
request(client, "DescribeMatchmakingRuleSets", input, options)
end
@doc """
Retrieves properties for one or more player sessions. This operation can be
used in several ways: (1) provide a `PlayerSessionId` to request properties
for a specific player session; (2) provide a `GameSessionId` to request
properties for all player sessions in the specified game session; (3)
provide a `PlayerId` to request properties for all player sessions of a
specified player.
To get game session record(s), specify only one of the following: a player
session ID, a game session ID, or a player ID. You can filter this request
by player session status. Use the pagination parameters to retrieve results
as a set of sequential pages. If successful, a `PlayerSession` object is
returned for each session matching the request.
*Available in Amazon GameLift Local.*

- `CreatePlayerSession`
- `CreatePlayerSessions`
- `DescribePlayerSessions`
- Game session placements:
  - `StartGameSessionPlacement`
  - `DescribeGameSessionPlacement`
  - `StopGameSessionPlacement`
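
**Example**

A minimal sketch (module name, client setup, and the sample ID are
assumptions); pass exactly one of the three identifiers:

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    # All player sessions attached to one game session.
    AWS.GameLift.describe_player_sessions(client, %{"GameSessionId" => "gsess-1111aaaa-22bb"})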
"""
def describe_player_sessions(client, input, options \\ []) do
request(client, "DescribePlayerSessions", input, options)
end
@doc """
Retrieves a fleet's runtime configuration settings. The runtime
configuration tells Amazon GameLift which server processes to run (and how)
on each instance in the fleet.
To get a runtime configuration, specify the fleet's unique identifier. If
successful, a `RuntimeConfiguration` object is returned for the requested
fleet. If the requested fleet has been deleted, the result set is empty.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
[Running Multiple Processes on a
Fleet](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-multiprocess.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- Describe fleets:
  - `DescribeFleetAttributes`
  - `DescribeFleetCapacity`
  - `DescribeFleetPortSettings`
  - `DescribeFleetUtilization`
  - `DescribeRuntimeConfiguration`
  - `DescribeEC2InstanceLimits`
  - `DescribeFleetEvents`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
"""
def describe_runtime_configuration(client, input, options \\ []) do
request(client, "DescribeRuntimeConfiguration", input, options)
end
@doc """
Retrieves all scaling policies applied to a fleet.
To get a fleet's scaling policies, specify the fleet ID. You can filter
this request by policy status, such as to retrieve only active scaling
policies. Use the pagination parameters to retrieve results as a set of
sequential pages. If successful, a set of `ScalingPolicy` objects is returned
for the fleet.
A fleet may have all of its scaling policies suspended
(`StopFleetActions`). This operation does not affect the status of the
scaling policies, which remains ACTIVE. To see whether a fleet's scaling
policies are in force or suspended, call `DescribeFleetAttributes` and
check the stopped actions.

- `DescribeFleetCapacity`
- `UpdateFleetCapacity`
- `DescribeEC2InstanceLimits`
- Manage scaling policies:
  - `PutScalingPolicy` (auto-scaling)
  - `DescribeScalingPolicies` (auto-scaling)
  - `DeleteScalingPolicy` (auto-scaling)
- Manage fleet actions:
  - `StartFleetActions`
  - `StopFleetActions`
"""
def describe_scaling_policies(client, input, options \\ []) do
request(client, "DescribeScalingPolicies", input, options)
end
@doc """
Retrieves properties for a Realtime script.
To request a script record, specify the script ID. If successful, an object
containing the script properties is returned.
**Learn more**
[Amazon GameLift Realtime
Servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/realtime-intro.html)
**Related operations**

- `CreateScript`
- `ListScripts`
- `DescribeScript`
- `UpdateScript`
- `DeleteScript`
"""
def describe_script(client, input, options \\ []) do
request(client, "DescribeScript", input, options)
end
@doc """
Retrieves valid VPC peering authorizations that are pending for the AWS
account. This operation returns all VPC peering authorizations and requests
for peering, including those initiated and received by this account.

- `CreateVpcPeeringAuthorization`
- `DescribeVpcPeeringAuthorizations`
- `DeleteVpcPeeringAuthorization`
- `CreateVpcPeeringConnection`
- `DescribeVpcPeeringConnections`
- `DeleteVpcPeeringConnection`
"""
def describe_vpc_peering_authorizations(client, input, options \\ []) do
request(client, "DescribeVpcPeeringAuthorizations", input, options)
end
@doc """
Retrieves information on VPC peering connections. Use this operation to get
peering information for all fleets or for one specific fleet ID.
To retrieve connection information, call this operation from the AWS
account that is used to manage the Amazon GameLift fleets. Specify a fleet
ID or leave the parameter empty to retrieve all connection records. If
successful, the retrieved information includes both active and pending
connections. Active connections identify the IpV4 CIDR block that the VPC
uses to connect.

- `CreateVpcPeeringAuthorization`
- `DescribeVpcPeeringAuthorizations`
- `DeleteVpcPeeringAuthorization`
- `CreateVpcPeeringConnection`
- `DescribeVpcPeeringConnections`
- `DeleteVpcPeeringConnection`
"""
def describe_vpc_peering_connections(client, input, options \\ []) do
request(client, "DescribeVpcPeeringConnections", input, options)
end
@doc """
Retrieves the location of stored game session logs for a specified game
session. When a game session is terminated, Amazon GameLift automatically
stores the logs in Amazon S3 and retains them for 14 days. Use this URL to
download the logs.

> Note: See the [AWS Service
> Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_gamelift)
> page for maximum log file sizes. Log files that exceed this limit are not
> saved.

- `CreateGameSession`
- `DescribeGameSessions`
- `DescribeGameSessionDetails`
- `SearchGameSessions`
- `UpdateGameSession`
- `GetGameSessionLogUrl`
- Game session placements:
  - `StartGameSessionPlacement`
  - `DescribeGameSessionPlacement`
  - `StopGameSessionPlacement`
"""
def get_game_session_log_url(client, input, options \\ []) do
request(client, "GetGameSessionLogUrl", input, options)
end
@doc """
Requests remote access to a fleet instance. Remote access is useful for
debugging, gathering benchmarking data, or observing activity in real time.
To remotely access an instance, you need credentials that match the
operating system of the instance. For a Windows instance, Amazon GameLift
returns a user name and password as strings for use with a Windows Remote
Desktop client. For a Linux instance, Amazon GameLift returns a user name
and RSA private key, also as strings, for use with an SSH client. The
private key must be saved in the proper format to a `.pem` file before
using. If you're making this request using the AWS CLI, saving the secret
can be handled as part of the `GetInstanceAccess` request, as shown in one of
the examples for this operation.
To request access to a specific instance, specify the IDs of both the
instance and the fleet it belongs to. You can retrieve a fleet's instance
IDs by calling `DescribeInstances`. If successful, an `InstanceAccess`
object is returned that contains the instance's IP address and a set of
credentials.
**Learn more**
[Remotely Access Fleet
Instances](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-remote-access.html)
[Debug Fleet
Issues](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-creating-debug.html)
**Related operations**

- `DescribeInstances`
- `GetInstanceAccess`
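
**Example**

A retrieval sketch; the module name, client setup, and the shape of the
returned credentials are assumptions based on the `InstanceAccess`
description above:

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    input = %{"FleetId" => "fleet-1111aaaa-22bb-33cc-44dd-5555eeee66ff", "InstanceId" => "i-0abc123def456789a"}
    {:ok, output, _http} = AWS.GameLift.get_instance_access(client, input)
    # For a Linux instance, the returned credentials hold an RSA key to save as a .pem file.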
"""
def get_instance_access(client, input, options \\ []) do
request(client, "GetInstanceAccess", input, options)
end
@doc """
Retrieves all aliases for this AWS account. You can filter the result set
by alias name and/or routing strategy type. Use the pagination parameters
to retrieve results in sequential pages.

> Note: Returned aliases are not listed in any particular order.

- `CreateAlias`
- `ListAliases`
- `DescribeAlias`
- `UpdateAlias`
- `DeleteAlias`
- `ResolveAlias`
"""
def list_aliases(client, input, options \\ []) do
request(client, "ListAliases", input, options)
end
@doc """
Retrieves build resources for all builds associated with the AWS account in
use. You can limit results to builds that are in a specific status by using
the `Status` parameter. Use the pagination parameters to retrieve results
in a set of sequential pages.

> Note: Build resources are not listed in any particular order.

**Learn more**
[ Upload a Custom Server
Build](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html)
**Related operations**

- `CreateBuild`
- `ListBuilds`
- `DescribeBuild`
- `UpdateBuild`
- `DeleteBuild`
"""
def list_builds(client, input, options \\ []) do
request(client, "ListBuilds", input, options)
end
@doc """
Retrieves a collection of fleet resources for this AWS account. You can
filter the result set to find only those fleets that are deployed with a
specific build or script. Use the pagination parameters to retrieve results
in sequential pages.

> Note: Fleet resources are not listed in a particular order.

**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- `DescribeFleetAttributes`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
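
**Example**

A minimal sketch (module name, client setup, and the sample build ID are
assumptions):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    # Only fleets deployed with a given build; omit "BuildId" to list all fleets.
    AWS.GameLift.list_fleets(client, %{"BuildId" => "build-1111aaaa-22bb-33cc-44dd-5555eeee66ff", "Limit" => 20})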
"""
def list_fleets(client, input, options \\ []) do
request(client, "ListFleets", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Retrieves information on all game server groups that exist in the current
AWS account for the selected Region. Use the pagination parameters to
retrieve results in a set of sequential segments.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `CreateGameServerGroup`
- `ListGameServerGroups`
- `DescribeGameServerGroup`
- `UpdateGameServerGroup`
- `DeleteGameServerGroup`
- `ResumeGameServerGroup`
- `SuspendGameServerGroup`
- `DescribeGameServerInstances`
"""
def list_game_server_groups(client, input, options \\ []) do
request(client, "ListGameServerGroups", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Retrieves information on all game servers that are currently active in a
specified game server group. You can opt to sort the list by game server
age. Use the pagination parameters to retrieve results in a set of
sequential segments.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `RegisterGameServer`
- `ListGameServers`
- `ClaimGameServer`
- `DescribeGameServer`
- `UpdateGameServer`
- `DeregisterGameServer`
"""
def list_game_servers(client, input, options \\ []) do
request(client, "ListGameServers", input, options)
end
@doc """
Retrieves script records for all Realtime scripts that are associated with
the AWS account in use.
**Learn more**
[Amazon GameLift Realtime
Servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/realtime-intro.html)
**Related operations**

- `CreateScript`
- `ListScripts`
- `DescribeScript`
- `UpdateScript`
- `DeleteScript`
"""
def list_scripts(client, input, options \\ []) do
request(client, "ListScripts", input, options)
end
@doc """
Retrieves all tags that are assigned to a GameLift resource. Resource tags
are used to organize AWS resources for a range of purposes. This operation
handles the permissions necessary to manage tags for the following GameLift
resource types:

- Build
- Script
- Fleet
- Alias
- GameSessionQueue
- MatchmakingConfiguration
- MatchmakingRuleSet

To list tags for a resource, specify the unique ARN value for the resource.
**Learn more**
[Tagging AWS
Resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html)
in the *AWS General Reference*
[ AWS Tagging
Strategies](http://aws.amazon.com/answers/account-management/aws-tagging-strategies/)
**Related operations**

- `TagResource`
- `UntagResource`
- `ListTagsForResource`
"""
def list_tags_for_resource(client, input, options \\ []) do
request(client, "ListTagsForResource", input, options)
end
@doc """
Creates or updates a scaling policy for a fleet. Scaling policies are used
to automatically scale a fleet's hosting capacity to meet player demand. An
active scaling policy instructs Amazon GameLift to track a fleet metric and
automatically change the fleet's capacity when a certain threshold is
reached. There are two types of scaling policies: target-based and
rule-based. Use a target-based policy to quickly and efficiently manage
fleet scaling; this option is the most commonly used. Use rule-based
policies when you need to exert fine-grained control over auto-scaling.
Fleets can have multiple scaling policies of each type in force at the same
time; you can have one target-based policy, one or multiple rule-based
scaling policies, or both. We recommend caution, however, because multiple
auto-scaling policies can have unintended consequences.
You can temporarily suspend all scaling policies for a fleet by calling
`StopFleetActions` with the fleet action AUTO_SCALING. To resume scaling
policies, call `StartFleetActions` with the same fleet action. To stop just
one scaling policy, or to permanently remove it, you must delete the policy
with `DeleteScalingPolicy`.
Learn more about how to work with auto-scaling in [Set Up Fleet Automatic
Scaling](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-autoscaling.html).
**Target-based policy**
A target-based policy tracks a single metric: PercentAvailableGameSessions.
This metric tells us how much of a fleet's hosting capacity is ready to
host game sessions but is not currently in use. This is the fleet's buffer;
it measures the additional player demand that the fleet could handle at
current capacity. With a target-based policy, you set your ideal buffer
size and leave it to Amazon GameLift to take whatever action is needed to
maintain that target.
For example, you might choose to maintain a 10% buffer for a fleet that has
the capacity to host 100 simultaneous game sessions. This policy tells
Amazon GameLift to take action whenever the fleet's available capacity
falls below or rises above 10 game sessions. Amazon GameLift will start new
instances or stop unused instances in order to return to the 10% buffer.
To create or update a target-based policy, specify a fleet ID and name, and
set the policy type to "TargetBased". Specify the metric to track
(PercentAvailableGameSessions) and reference a `TargetConfiguration` object
with your desired buffer value. Exclude all other parameters. On a
successful request, the policy name is returned. The scaling policy is
automatically in force as soon as it's successfully created. If the fleet's
auto-scaling actions are temporarily suspended, the new policy will be in
force once the fleet actions are restarted.
**Rule-based policy**
A rule-based policy tracks a specified fleet metric, sets a threshold value,
and specifies the type of action to initiate when triggered. With a
rule-based policy, you can select from several available fleet metrics.
Each policy specifies whether to scale up or scale down (and by how much),
so you need one policy for each type of action.
For example, a policy may make the following statement: "If the percentage
of idle instances is greater than 20% for more than 15 minutes, then reduce
the fleet capacity by 10%."
A policy's rule statement has the following structure:
If `[MetricName]` is `[ComparisonOperator]` `[Threshold]` for
`[EvaluationPeriods]` minutes, then `[ScalingAdjustmentType]` to/by
`[ScalingAdjustment]`.
To implement the example, the rule statement would look like this:
If `[PercentIdleInstances]` is `[GreaterThanThreshold]` `[20]` for `[15]`
minutes, then `[PercentChangeInCapacity]` to/by `[10]`.
To create or update a scaling policy, specify a unique combination of name
and fleet ID, and set the policy type to "RuleBased". Specify the parameter
values for a policy rule statement. On a successful request, the policy
name is returned. Scaling policies are automatically in force as soon as
they're successfully created. If the fleet's auto-scaling actions are
temporarily suspended, the new policy will be in force once the fleet
actions are restarted.

- `DescribeFleetCapacity`
- `UpdateFleetCapacity`
- `DescribeEC2InstanceLimits`
- Manage scaling policies:
  - `PutScalingPolicy` (auto-scaling)
  - `DescribeScalingPolicies` (auto-scaling)
  - `DeleteScalingPolicy` (auto-scaling)
- Manage fleet actions:
  - `StartFleetActions`
  - `StopFleetActions`
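
**Example**

A target-based sketch following the description above (module name and
client setup are assumptions; the keys follow the AWS API request shape):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    # Hold a 10% buffer of available game session capacity.
    input = %{
      "FleetId" => "fleet-1111aaaa-22bb-33cc-44dd-5555eeee66ff",
      "Name" => "hold-ten-percent-buffer",
      "PolicyType" => "TargetBased",
      "MetricName" => "PercentAvailableGameSessions",
      "TargetConfiguration" => %{"TargetValue" => 10}
    }
    AWS.GameLift.put_scaling_policy(client, input)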
"""
def put_scaling_policy(client, input, options \\ []) do
request(client, "PutScalingPolicy", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Creates a new game server resource and notifies GameLift FleetIQ that the
game server is ready to host gameplay and players. This operation is called
by a game server process that is running on an instance in a game server
group. Registering game servers enables GameLift FleetIQ to track available
game servers and enables game clients and services to claim a game server
for a new game session.
To register a game server, identify the game server group and instance
where the game server is running, and provide a unique identifier for the
game server. You can also include connection and game server data. When a
game client or service requests a game server by calling `ClaimGameServer`,
this information is returned in the response.
Once a game server is successfully registered, it is put in status
`AVAILABLE`. A request to register a game server may fail if the instance
it is running on is in the process of shutting down as part of instance
balancing or scale-down activity.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `RegisterGameServer`
- `ListGameServers`
- `ClaimGameServer`
- `DescribeGameServer`
- `UpdateGameServer`
- `DeregisterGameServer`
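
**Example**

A registration sketch, as called from a game server process (module name,
client setup, and all sample values are assumptions):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    input = %{
      "GameServerGroupName" => "my-gsg",
      "GameServerId" => "gs-1234",
      "InstanceId" => "i-0abc123def456789a",
      # Connection details returned to clients that later claim this server.
      "ConnectionInfo" => "10.1.2.3:7777"
    }
    AWS.GameLift.register_game_server(client, input)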
"""
def register_game_server(client, input, options \\ []) do
request(client, "RegisterGameServer", input, options)
end
@doc """
Retrieves a fresh set of credentials for use when uploading a new set of
game build files to Amazon GameLift's Amazon S3. This is done as part of
the build creation process; see `CreateBuild`.
To request new credentials, specify the build ID as returned with an
initial `CreateBuild` request. If successful, a new set of credentials are
returned, along with the S3 storage location associated with the build ID.
**Learn more**
[ Create a Build with Files in
S3](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-cli-uploading.html#gamelift-build-cli-uploading-create-build)
**Related operations**

- `CreateBuild`
- `ListBuilds`
- `DescribeBuild`
- `UpdateBuild`
- `DeleteBuild`
"""
def request_upload_credentials(client, input, options \\ []) do
request(client, "RequestUploadCredentials", input, options)
end
@doc """
Retrieves the fleet ID that an alias is currently pointing to.

- `CreateAlias`
- `ListAliases`
- `DescribeAlias`
- `UpdateAlias`
- `DeleteAlias`
- `ResolveAlias`
"""
def resolve_alias(client, input, options \\ []) do
request(client, "ResolveAlias", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Reinstates activity on a game server group after it has been suspended. A
game server group might be suspended by the `SuspendGameServerGroup`
operation, or it might be suspended involuntarily due to a configuration
problem. In the second case, you can manually resume activity on the group
once the configuration problem has been resolved. Refer to the game server
group status and status reason for more information on why group activity
is suspended.
To resume activity, specify a game server group ARN and the type of
activity to be resumed. If successful, a `GameServerGroup` object is
returned showing that the resumed activity is no longer listed in
`SuspendedActions`.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `CreateGameServerGroup`
- `ListGameServerGroups`
- `DescribeGameServerGroup`
- `UpdateGameServerGroup`
- `DeleteGameServerGroup`
- `ResumeGameServerGroup`
- `SuspendGameServerGroup`
- `DescribeGameServerInstances`
"""
def resume_game_server_group(client, input, options \\ []) do
request(client, "ResumeGameServerGroup", input, options)
end
@doc """
Retrieves all active game sessions that match a set of search criteria and
sorts them in a specified order. You can search or sort by the following
game session attributes:

- **gameSessionId** -- A unique identifier for the game session. You can use
  either a `GameSessionId` or `GameSessionArn` value.
- **gameSessionName** -- Name assigned to a game session. This value is set
  when requesting a new game session with `CreateGameSession` or updating
  with `UpdateGameSession`. Game session names do not need to be unique to a
  game session.
- **gameSessionProperties** -- Custom data defined in a game session's
  `GameProperty` parameter. `GameProperty` values are stored as key:value
  pairs; the filter expression must indicate the key and a string to search
  the data values for. For example, to search for game sessions with custom
  data containing the key:value pair "gameMode:brawl", specify the following:
  `gameSessionProperties.gameMode = "brawl"`. All custom data values are
  searched as strings.
- **maximumSessions** -- Maximum number of player sessions allowed for a
  game session. This value is set when requesting a new game session with
  `CreateGameSession` or updating with `UpdateGameSession`.
- **creationTimeMillis** -- Value indicating when a game session was
  created. It is expressed in Unix time as milliseconds.
- **playerSessionCount** -- Number of players currently connected to a game
  session. This value changes rapidly as players join the session or drop
  out.
- **hasAvailablePlayerSessions** -- Boolean value indicating whether a game
  session has reached its maximum number of players. It is highly
  recommended that all search requests include this filter attribute to
  optimize search performance and return only sessions that players can
  join.

> Note: Returned values for `playerSessionCount` and
> `hasAvailablePlayerSessions` change quickly as players join sessions and
> others drop out. Results should be considered a snapshot in time. Be sure
> to refresh search results often, and handle sessions that fill up before a
> player can join.

To search or sort, specify either a fleet ID or an alias ID, and
provide a search filter expression, a sort expression, or both. If
successful, a collection of `GameSession` objects matching the request is
returned. Use the pagination parameters to retrieve results as a set of
sequential pages.
You can search for game sessions one fleet at a time only. To find game
sessions across multiple fleets, you must search each fleet separately and
combine the results. This search feature finds only game sessions that are
in `ACTIVE` status. To locate games in statuses other than active, use
`DescribeGameSessionDetails`.

- `CreateGameSession`
- `DescribeGameSessions`
- `DescribeGameSessionDetails`
- `SearchGameSessions`
- `UpdateGameSession`
- `GetGameSessionLogUrl`
- Game session placements:
  - `StartGameSessionPlacement`
  - `DescribeGameSessionPlacement`
  - `StopGameSessionPlacement`
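
**Example**

A search sketch using the recommended availability filter (module name,
client setup, and the sample fleet ID are assumptions):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    input = %{
      "FleetId" => "fleet-1111aaaa-22bb-33cc-44dd-5555eeee66ff",
      "FilterExpression" => "hasAvailablePlayerSessions=true",
      "SortExpression" => "creationTimeMillis ASC",
      "Limit" => 20
    }
    AWS.GameLift.search_game_sessions(client, input)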
"""
def search_game_sessions(client, input, options \\ []) do
request(client, "SearchGameSessions", input, options)
end
@doc """
Resumes activity on a fleet that was suspended with `StopFleetActions`.
Currently, this operation is used to restart a fleet's auto-scaling
activity.
To start fleet actions, specify the fleet ID and the type of actions to
restart. When auto-scaling fleet actions are restarted, Amazon GameLift
once again initiates scaling events as triggered by the fleet's scaling
policies. If actions on the fleet were never stopped, this operation will
have no effect. You can view a fleet's stopped actions using
`DescribeFleetAttributes`.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- `DescribeFleetAttributes`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
"""
def start_fleet_actions(client, input, options \\ []) do
request(client, "StartFleetActions", input, options)
end
@doc """
Places a request for a new game session in a queue (see
`CreateGameSessionQueue`). When processing a placement request, Amazon
GameLift searches for available resources on the queue's destinations,
scanning each until it finds resources or the placement request times out.
A game session placement request can also request player sessions. When a
new game session is successfully created, Amazon GameLift creates a player
session for each player included in the request.
When placing a game session, by default Amazon GameLift tries each fleet in
the order they are listed in the queue configuration. Ideally, a queue's
destinations are listed in preference order.
Alternatively, when requesting a game session with players, you can also
provide latency data for each player in relevant Regions. Latency data
indicates the performance lag a player experiences when connected to a
fleet in the Region. Amazon GameLift uses latency data to reorder the list
of destinations to place the game session in a Region with minimal lag. If
latency data is provided for multiple players, Amazon GameLift calculates
each Region's average lag for all players and reorders to get the best game
play across all players.
To place a new game session request, specify the following:

- The queue name and a set of game session properties and settings
- A unique ID (such as a UUID) for the placement. You use this ID to track
  the status of the placement request
- (Optional) A set of player data and a unique player ID for each player
  that you are joining to the new game session (player data is optional, but
  if you include it, you must also provide a unique ID for each player)
- Latency data for all players (if you want to optimize game play for the
  players)

If successful, a new game session placement is created.
To track the status of a placement request, call
`DescribeGameSessionPlacement` and check the request's status. If the
status is `FULFILLED`, a new game session has been created and a game
session ARN and Region are referenced. If the placement request times out,
you can resubmit the request or retry it with a different queue.

- `CreateGameSession`
- `DescribeGameSessions`
- `DescribeGameSessionDetails`
- `SearchGameSessions`
- `UpdateGameSession`
- `GetGameSessionLogUrl`
- Game session placements:
  - `StartGameSessionPlacement`
  - `DescribeGameSessionPlacement`
  - `StopGameSessionPlacement`
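
**Example**

A placement sketch (module name, client setup, and all sample IDs are
assumptions):

    client = %AWS.Client{access_key_id: "AKIA...", secret_access_key: "secret", region: "us-west-2"}
    input = %{
      # A unique ID you generate, used to track this placement.
      "PlacementId" => "0f5eb2a0-aaaa-bbbb-cccc-123456789abc",
      "GameSessionQueueName" => "my-session-queue",
      "MaximumPlayerSessionCount" => 4,
      "DesiredPlayerSessions" => [%{"PlayerId" => "player-1"}, %{"PlayerId" => "player-2"}]
    }
    AWS.GameLift.start_game_session_placement(client, input)
    # Poll describe_game_session_placement until the status is FULFILLED.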
"""
def start_game_session_placement(client, input, options \\ []) do
request(client, "StartGameSessionPlacement", input, options)
end
@doc """
Finds new players to fill open slots in an existing game session. This
operation can be used to add players to matched games that start with fewer
than the maximum number of players or to replace players when they drop
out. By backfilling with the same matchmaker used to create the original
match, you ensure that new players meet the match criteria and maintain a
consistent experience throughout the game session. You can backfill a match
anytime after a game session has been created.
To request a match backfill, specify a unique ticket ID, the existing game
session's ARN, a matchmaking configuration, and a set of data that
describes all current players in the game session. If successful, a match
backfill ticket is created and returned with status set to QUEUED. The
ticket is placed in the matchmaker's ticket pool and processed. Track the
status of the ticket to respond as needed.
The process of finding backfill matches is essentially identical to the
initial matchmaking process. The matchmaker searches the pool and groups
tickets together to form potential matches, allowing only one backfill
ticket per potential match. Once a match is formed, the matchmaker
creates player sessions for the new players. All tickets in the match are
updated with the game session's connection information, and the
`GameSession` object is updated to include matchmaker data on the new
players. For more detail on how match backfill requests are processed, see
[ How Amazon GameLift FlexMatch
Works](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-match.html).
**Learn more**
[ Backfill Existing Games with
FlexMatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-backfill.html)
[ How GameLift FlexMatch
Works](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-match.html)
**Related operations**

- `StartMatchmaking`
- `DescribeMatchmaking`
- `StopMatchmaking`
- `AcceptMatch`
- `StartMatchBackfill`
"""
def start_match_backfill(client, input, options \\ []) do
request(client, "StartMatchBackfill", input, options)
end
@doc """
Uses FlexMatch to create a game match for a group of players based on
custom matchmaking rules, and starts a new game for the matched players.
Each matchmaking request specifies the type of match to build (team
configuration, rules for an acceptable match, etc.). The request also
specifies the players to find a match for and where to host the new game
session for optimal performance. A matchmaking request might start with a
single player or a group of players who want to play together. FlexMatch
finds additional players as needed to fill the match. Match type, rules,
and the queue used to place a new game session are defined in a
`MatchmakingConfiguration`.
To start matchmaking, provide a unique ticket ID, specify a matchmaking
configuration, and include the players to be matched. You must also include
a set of player attributes relevant for the matchmaking configuration. If
successful, a matchmaking ticket is returned with status set to `QUEUED`.
Track the status of the ticket to respond as needed and acquire game
session connection information for successfully completed matches. Ticket
status updates are tracked using event notification through Amazon Simple
Notification Service (SNS), which is defined in the matchmaking
configuration.
**Processing a matchmaking request** -- FlexMatch handles a matchmaking
request as follows:

1. Your client code submits a `StartMatchmaking` request for one or
more players and tracks the status of the request ticket.
2. FlexMatch uses this ticket and others in process to build an
acceptable match. When a potential match is identified, all tickets in the
proposed match are advanced to the next status.
3. If the match requires player acceptance (set in the matchmaking
configuration), the tickets move into status `REQUIRES_ACCEPTANCE`. This
status triggers your client code to solicit acceptance from all players in
every ticket involved in the match, and then call `AcceptMatch` for each
player. If any player rejects or fails to accept the match before a
specified timeout, the proposed match is dropped (see `AcceptMatch` for
more details).
4. Once a match is proposed and accepted, the matchmaking tickets
move into status `PLACING`. FlexMatch locates resources for a new game
session using the game session queue (set in the matchmaking configuration)
and creates the game session based on the match data.
5. When the match is successfully placed, the matchmaking tickets
move into `COMPLETED` status. Connection information (including game
session endpoint and player session) is added to the matchmaking tickets.
Matched players can use the connection information to join the game.

**Learn more**
[ Add FlexMatch to a Game
Client](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-client.html)
[ Set Up FlexMatch Event
Notification](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-notification.html)
[ FlexMatch Integration
Roadmap](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-tasks.html)
[ How GameLift FlexMatch
Works](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-match.html)
**Related operations**

- `StartMatchmaking`
- `DescribeMatchmaking`
- `StopMatchmaking`
- `AcceptMatch`
- `StartMatchBackfill`
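
**Example**

A minimal illustrative call (all values are placeholders; `client` is assumed to be a configured `AWS.Client` struct):

```elixir
# Illustrative values only.
{:ok, result, _http_response} =
  start_matchmaking(client, %{
    "ConfigurationName" => "my-matchmaker",
    "Players" => [%{"PlayerId" => "player-1"}]
  })
```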
"""
def start_matchmaking(client, input, options \\ []) do
request(client, "StartMatchmaking", input, options)
end
@doc """
Suspends activity on a fleet. Currently, this operation is used to stop a
fleet's auto-scaling activity, temporarily stopping it from triggering
scaling events. The policies can be retained and auto-scaling activity can
be restarted using `StartFleetActions`. You can view a fleet's stopped
actions using `DescribeFleetAttributes`.
To stop fleet actions, specify the fleet ID and the type of actions to
suspend. When auto-scaling fleet actions are stopped, Amazon GameLift no
longer initiates scaling events except in response to manual changes using
`UpdateFleetCapacity`.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- `DescribeFleetAttributes`
- `UpdateFleetAttributes`
- `StartFleetActions` or `StopFleetActions`
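
**Example**

A minimal illustrative call, assuming `client` is a configured `AWS.Client` struct:

```elixir
# Illustrative values only.
{:ok, result, _http_response} =
  stop_fleet_actions(client, %{
    "FleetId" => "fleet-12345",
    "Actions" => ["AUTO_SCALING"]
  })
```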
"""
def stop_fleet_actions(client, input, options \\ []) do
request(client, "StopFleetActions", input, options)
end
@doc """
Cancels a game session placement that is in `PENDING` status. To stop a
placement, provide the placement ID values. If successful, the placement is
moved to `CANCELLED` status.

- `CreateGameSession`
- `DescribeGameSessions`
- `DescribeGameSessionDetails`
- `SearchGameSessions`
- `UpdateGameSession`
- `GetGameSessionLogUrl`
- Game session placements:
  - `StartGameSessionPlacement`
  - `DescribeGameSessionPlacement`
  - `StopGameSessionPlacement`
"""
def stop_game_session_placement(client, input, options \\ []) do
request(client, "StopGameSessionPlacement", input, options)
end
@doc """
Cancels a matchmaking ticket or match backfill ticket that is currently
being processed. To stop the matchmaking operation, specify the ticket ID.
If successful, work on the ticket is stopped, and the ticket status is
changed to `CANCELLED`.
This call is also used to turn off automatic backfill for an individual
game session. This is for game sessions that are created with a matchmaking
configuration that has automatic backfill enabled. The ticket ID is
included in the `MatchmakerData` of an updated game session object, which
is provided to the game server.
> If the operation is successful, the service sends back an empty JSON
> struct with the HTTP 200 response (not an empty HTTP body).

**Learn more**
[ Add FlexMatch to a Game
Client](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-client.html)
**Related operations**

- `StartMatchmaking`
- `DescribeMatchmaking`
- `StopMatchmaking`
- `AcceptMatch`
- `StartMatchBackfill`
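
**Example**

A minimal illustrative call, assuming `client` is a configured `AWS.Client` struct:

```elixir
# Illustrative ticket ID only.
{:ok, result, _http_response} = stop_matchmaking(client, %{"TicketId" => "ticket-1234"})
```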
"""
def stop_matchmaking(client, input, options \\ []) do
request(client, "StopMatchmaking", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Temporarily stops activity on a game server group without terminating
instances or the game server group. You can restart activity by calling
`ResumeGameServerGroup`. You can suspend the following activity:

- **Instance type replacement** - This activity evaluates the
current game hosting viability of all Spot instance types that are defined
for the game server group. It updates the Auto Scaling group to remove
nonviable Spot Instance types, which have a higher chance of game server
interruptions. It then balances capacity across the remaining viable Spot
Instance types. When this activity is suspended, the Auto Scaling group
continues with its current balance, regardless of viability. Instance
protection, utilization metrics, and capacity scaling activities continue
to be active.

To suspend activity, specify a game server group ARN and the
type of activity to be suspended. If successful, a `GameServerGroup` object
is returned showing that the activity is listed in `SuspendedActions`.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `CreateGameServerGroup`
- `ListGameServerGroups`
- `DescribeGameServerGroup`
- `UpdateGameServerGroup`
- `DeleteGameServerGroup`
- `ResumeGameServerGroup`
- `SuspendGameServerGroup`
- `DescribeGameServerInstances`
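
**Example**

A minimal illustrative call (placeholder values; `client` is assumed to be a configured `AWS.Client` struct):

```elixir
# Illustrative values only.
{:ok, result, _http_response} =
  suspend_game_server_group(client, %{
    "GameServerGroupName" => "my-game-server-group",
    "SuspendActions" => ["REPLACE_INSTANCE_TYPES"]
  })
```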
"""
def suspend_game_server_group(client, input, options \\ []) do
request(client, "SuspendGameServerGroup", input, options)
end
@doc """
Assigns a tag to a GameLift resource. AWS resource tags provide an
additional management tool set. You can use tags to organize resources,
create IAM permissions policies to manage access to groups of resources,
customize AWS cost breakdowns, etc. This operation handles the permissions
necessary to manage tags for the following GameLift resource types:

- Build
- Script
- Fleet
- Alias
- GameSessionQueue
- MatchmakingConfiguration
- MatchmakingRuleSet

To add a tag to a resource, specify the unique ARN value for
the resource and provide a tag list containing one or more tags. The
operation succeeds even if the list includes tags that are already assigned
to the specified resource.
**Learn more**
[Tagging AWS
Resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html)
in the *AWS General Reference*
[ AWS Tagging
Strategies](http://aws.amazon.com/answers/account-management/aws-tagging-strategies/)
**Related operations**

- `TagResource`
- `UntagResource`
- `ListTagsForResource`
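
**Example**

A minimal illustrative call (placeholder ARN and tags; `client` is assumed to be a configured `AWS.Client` struct):

```elixir
# Illustrative values only.
{:ok, result, _http_response} =
  tag_resource(client, %{
    "ResourceARN" => "arn:aws:gamelift:us-east-1:123456789012:fleet/fleet-12345",
    "Tags" => [%{"Key" => "team", "Value" => "gamedev"}]
  })
```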
"""
def tag_resource(client, input, options \\ []) do
request(client, "TagResource", input, options)
end
@doc """
Removes a tag that is assigned to a GameLift resource. Resource tags are
used to organize AWS resources for a range of purposes. This operation
handles the permissions necessary to manage tags for the following GameLift
resource types:

- Build
- Script
- Fleet
- Alias
- GameSessionQueue
- MatchmakingConfiguration
- MatchmakingRuleSet

To remove a tag from a resource, specify the unique ARN value
for the resource and provide a string list containing one or more tags to
be removed. This operation succeeds even if the list includes tags that are
not currently assigned to the specified resource.
**Learn more**
[Tagging AWS
Resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html)
in the *AWS General Reference*
[ AWS Tagging
Strategies](http://aws.amazon.com/answers/account-management/aws-tagging-strategies/)
**Related operations**

- `TagResource`
- `UntagResource`
- `ListTagsForResource`
"""
def untag_resource(client, input, options \\ []) do
request(client, "UntagResource", input, options)
end
@doc """
Updates properties for an alias. To update properties, specify the alias ID
to be updated and provide the information to be changed. To reassign an
alias to another fleet, provide an updated routing strategy. If successful,
the updated alias record is returned.

- `CreateAlias`
- `ListAliases`
- `DescribeAlias`
- `UpdateAlias`
- `DeleteAlias`
- `ResolveAlias`
"""
def update_alias(client, input, options \\ []) do
request(client, "UpdateAlias", input, options)
end
@doc """
Updates metadata in a build resource, including the build name and version.
To update the metadata, specify the build ID to update and provide the new
values. If successful, a build object containing the updated metadata is
returned.
**Learn more**
[ Upload a Custom Server
Build](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html)
**Related operations**

- `CreateBuild`
- `ListBuilds`
- `DescribeBuild`
- `UpdateBuild`
- `DeleteBuild`
"""
def update_build(client, input, options \\ []) do
request(client, "UpdateBuild", input, options)
end
@doc """
Updates fleet properties, including name and description, for a fleet. To
update metadata, specify the fleet ID and the property values that you want
to change. If successful, the fleet ID for the updated fleet is returned.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- `DescribeFleetAttributes`
- Update fleets:
  - `UpdateFleetAttributes`
  - `UpdateFleetCapacity`
  - `UpdateFleetPortSettings`
  - `UpdateRuntimeConfiguration`
- `StartFleetActions` or `StopFleetActions`
"""
def update_fleet_attributes(client, input, options \\ []) do
request(client, "UpdateFleetAttributes", input, options)
end
@doc """
Updates capacity settings for a fleet. Use this operation to specify the
number of EC2 instances (hosts) that you want this fleet to contain. Before
calling this operation, you may want to call `DescribeEC2InstanceLimits` to
get the maximum capacity based on the fleet's EC2 instance type.
Specify the minimum and maximum number of instances. Amazon GameLift will not
change fleet capacity to values that fall outside of this range. This is
particularly important when using auto-scaling (see `PutScalingPolicy`) to
allow capacity to adjust based on player demand while imposing limits on
automatic adjustments.
To update fleet capacity, specify the fleet ID and the number of instances
you want the fleet to host. If successful, Amazon GameLift starts or
terminates instances so that the fleet's active instance count matches the
desired instance count. You can view a fleet's current capacity information
by calling `DescribeFleetCapacity`. If the desired instance count is higher
than the instance type's limit, the "Limit Exceeded" exception occurs.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- `DescribeFleetAttributes`
- Update fleets:
  - `UpdateFleetAttributes`
  - `UpdateFleetCapacity`
  - `UpdateFleetPortSettings`
  - `UpdateRuntimeConfiguration`
- `StartFleetActions` or `StopFleetActions`
"""
def update_fleet_capacity(client, input, options \\ []) do
request(client, "UpdateFleetCapacity", input, options)
end
@doc """
Updates port settings for a fleet. To update settings, specify the fleet ID
to be updated and list the permissions you want to update. List the
permissions you want to add in `InboundPermissionAuthorizations`, and
permissions you want to remove in `InboundPermissionRevocations`.
Permissions to be removed must match existing fleet permissions. If
successful, the fleet ID for the updated fleet is returned.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- `DescribeFleetAttributes`
- Update fleets:
  - `UpdateFleetAttributes`
  - `UpdateFleetCapacity`
  - `UpdateFleetPortSettings`
  - `UpdateRuntimeConfiguration`
- `StartFleetActions` or `StopFleetActions`
"""
def update_fleet_port_settings(client, input, options \\ []) do
request(client, "UpdateFleetPortSettings", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Updates information about a registered game server to help GameLift FleetIQ
track game server availability. This operation is called by a game
server process that is running on an instance in a game server group.
Use this operation to update the following types of game server
information. You can make all three types of updates in the same request:

- To update the game server's utilization status, identify the game
server and game server group and specify the current utilization status.
Use this status to identify when game servers are currently hosting games
and when they are available to be claimed.
- To report health status, identify the game server and game server
group and set health check to `HEALTHY`. If a game server does not
report health status for a certain length of time, the game server is no
longer considered healthy. As a result, it will eventually be deregistered
from the game server group to avoid affecting utilization metrics. The best
practice is to report health every 60 seconds.
- To change game server metadata, provide updated game server data.

Once a game server is successfully updated, the relevant
statuses and timestamps are updated.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `RegisterGameServer`
- `ListGameServers`
- `ClaimGameServer`
- `DescribeGameServer`
- `UpdateGameServer`
- `DeregisterGameServer`
"""
def update_game_server(client, input, options \\ []) do
request(client, "UpdateGameServer", input, options)
end
@doc """
**This operation is used with the Amazon GameLift FleetIQ solution and game
server groups.**
Updates GameLift FleetIQ-specific properties for a game server group. Many
Auto Scaling group properties are updated on the Auto Scaling group
directly, including the launch template, Auto Scaling policies, and
maximum/minimum/desired instance counts.
To update the game server group, specify the game server group ID and
provide the updated values. Before applying the updates, the new values are
validated to ensure that GameLift FleetIQ can continue to perform instance
balancing activity. If successful, a `GameServerGroup` object is returned.
**Learn more**
[GameLift FleetIQ
Guide](https://docs.aws.amazon.com/gamelift/latest/fleetiqguide/gsg-intro.html)
**Related operations**

- `CreateGameServerGroup`
- `ListGameServerGroups`
- `DescribeGameServerGroup`
- `UpdateGameServerGroup`
- `DeleteGameServerGroup`
- `ResumeGameServerGroup`
- `SuspendGameServerGroup`
- `DescribeGameServerInstances`
"""
def update_game_server_group(client, input, options \\ []) do
request(client, "UpdateGameServerGroup", input, options)
end
@doc """
Updates game session properties. This includes the session name, maximum
player count, protection policy, which controls whether or not an active
game session can be terminated during a scale-down event, and the player
session creation policy, which controls whether or not new players can join
the session. To update a game session, specify the game session ID and the
values you want to change. If successful, an updated `GameSession` object
is returned.

- `CreateGameSession`
- `DescribeGameSessions`
- `DescribeGameSessionDetails`
- `SearchGameSessions`
- `UpdateGameSession`
- `GetGameSessionLogUrl`
- Game session placements:
  - `StartGameSessionPlacement`
  - `DescribeGameSessionPlacement`
  - `StopGameSessionPlacement`
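
**Example**

A minimal illustrative call (placeholder values; `client` is assumed to be a configured `AWS.Client` struct):

```elixir
# Illustrative values only.
{:ok, result, _http_response} =
  update_game_session(client, %{
    "GameSessionId" => "arn:aws:gamelift:us-east-1::gamesession/fleet-123/gsess-abc",
    "MaximumPlayerSessionCount" => 16,
    "PlayerSessionCreationPolicy" => "DENY_ALL"
  })
```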
"""
def update_game_session(client, input, options \\ []) do
request(client, "UpdateGameSession", input, options)
end
@doc """
Updates settings for a game session queue, which determines how new game
session requests in the queue are processed. To update settings, specify
the queue name to be updated and provide the new settings. When updating
destinations, provide a complete list of destinations.
**Learn more**
[ Using Multi-Region
Queues](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-intro.html)
**Related operations**

- `CreateGameSessionQueue`
- `DescribeGameSessionQueues`
- `UpdateGameSessionQueue`
- `DeleteGameSessionQueue`
"""
def update_game_session_queue(client, input, options \\ []) do
request(client, "UpdateGameSessionQueue", input, options)
end
@doc """
Updates settings for a FlexMatch matchmaking configuration. These changes
affect all matches and game sessions that are created after the update. To
update settings, specify the configuration name to be updated and provide
the new settings.
**Learn more**
[ Design a FlexMatch
Matchmaker](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-configuration.html)
**Related operations**

- `CreateMatchmakingConfiguration`
- `DescribeMatchmakingConfigurations`
- `UpdateMatchmakingConfiguration`
- `DeleteMatchmakingConfiguration`
- `CreateMatchmakingRuleSet`
- `DescribeMatchmakingRuleSets`
- `ValidateMatchmakingRuleSet`
- `DeleteMatchmakingRuleSet`
"""
def update_matchmaking_configuration(client, input, options \\ []) do
request(client, "UpdateMatchmakingConfiguration", input, options)
end
@doc """
Updates the current runtime configuration for the specified fleet, which
tells Amazon GameLift how to launch server processes on instances in the
fleet. You can update a fleet's runtime configuration at any time after the
fleet is created; it does not need to be in an `ACTIVE` status.
To update runtime configuration, specify the fleet ID and provide a
`RuntimeConfiguration` object with an updated set of server process
configurations.
Each instance in an Amazon GameLift fleet checks regularly for an updated
runtime configuration and changes how it launches server processes to
comply with the latest version. Existing server processes are not affected
by the update; runtime configuration changes are applied gradually as
existing processes shut down and new processes are launched during Amazon
GameLift's normal process recycling activity.
**Learn more**
[Setting up GameLift
Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-intro.html)
**Related operations**

- `CreateFleet`
- `ListFleets`
- `DeleteFleet`
- `DescribeFleetAttributes`
- Update fleets:
  - `UpdateFleetAttributes`
  - `UpdateFleetCapacity`
  - `UpdateFleetPortSettings`
  - `UpdateRuntimeConfiguration`
- `StartFleetActions` or `StopFleetActions`
"""
def update_runtime_configuration(client, input, options \\ []) do
request(client, "UpdateRuntimeConfiguration", input, options)
end
@doc """
Updates Realtime script metadata and content.
To update script metadata, specify the script ID and provide updated name
and/or version values.
To update script content, provide an updated zip file by pointing to either
a local file or an Amazon S3 bucket location. You can use either method
regardless of how the original script was uploaded. Use the *Version*
parameter to track updates to the script.
If the call is successful, the updated metadata is stored in the script
record and a revised script is uploaded to the Amazon GameLift service.
Once the script is updated and acquired by a fleet instance, the new
version is used for all new game sessions.
**Learn more**
[Amazon GameLift Realtime
Servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/realtime-intro.html)
**Related operations**

- `CreateScript`
- `ListScripts`
- `DescribeScript`
- `UpdateScript`
- `DeleteScript`
"""
def update_script(client, input, options \\ []) do
request(client, "UpdateScript", input, options)
end
@doc """
Validates the syntax of a matchmaking rule or rule set. This operation
checks that the rule set is using syntactically correct JSON and that it
conforms to allowed property expressions. To validate syntax, provide a
rule set JSON string.
**Learn more**

- [Build a Rule Set](https://docs.aws.amazon.com/gamelift/latest/developerguide/match-rulesets.html)

**Related operations**

- `CreateMatchmakingConfiguration`
- `DescribeMatchmakingConfigurations`
- `UpdateMatchmakingConfiguration`
- `DeleteMatchmakingConfiguration`
- `CreateMatchmakingRuleSet`
- `DescribeMatchmakingRuleSets`
- `ValidateMatchmakingRuleSet`
- `DeleteMatchmakingRuleSet`
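
**Example**

A minimal illustrative call, assuming `client` is a configured `AWS.Client` struct and a rule set JSON file on disk:

```elixir
# Illustrative file name only; the body must be a JSON string of matchmaking rules.
rule_set_body = File.read!("rule_set.json")
{:ok, result, _http_response} =
  validate_matchmaking_rule_set(client, %{"RuleSetBody" => rule_set_body})
```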
"""
def validate_matchmaking_rule_set(client, input, options \\ []) do
request(client, "ValidateMatchmakingRuleSet", input, options)
end
@spec request(AWS.Client.t(), binary(), map(), list()) ::
{:ok, map() | nil, map()}
| {:error, term()}
defp request(client, action, input, options) do
client = %{client | service: "gamelift"}
host = build_host("gamelift", client)
url = build_url(host, client)
headers = [
{"Host", host},
{"Content-Type", "application/x-amz-json-1.1"},
{"X-Amz-Target", "GameLift.#{action}"}
]
payload = encode!(client, input)
headers = AWS.Request.sign_v4(client, "POST", url, headers, payload)
post(client, url, payload, headers, options)
end
defp post(client, url, payload, headers, options) do
case AWS.Client.request(client, :post, url, payload, headers, options) do
{:ok, %{status_code: 200, body: body} = response} ->
body = if body != "", do: decode!(client, body)
{:ok, body, response}
{:ok, response} ->
{:error, {:unexpected_response, response}}
error = {:error, _reason} -> error
end
end
defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do
endpoint
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}/"
end
defp encode!(client, payload) do
AWS.Client.encode!(client, payload, :json)
end
defp decode!(client, payload) do
AWS.Client.decode!(client, payload, :json)
end
end
lib/aws/generated/game_lift.ex
defmodule Hitbtc.Socket do
alias Hitbtc.Socket.Conn
@type request_id :: binary
@doc """
Open a new WebSocket connection to the HitBTC server.
The parameter is the process that will receive all notifications from the
WebSocket. The consumer process can be set using the `consumer_pid`
argument and defaults to the calling process.
"""
@spec open() :: {:ok, pid} | {:error, term}
@spec open(pid) :: {:ok, pid} | {:error, term}
def open(consumer_pid \\ nil), do: Conn.open(consumer_pid)
@doc """
Send login request to HitBTC
Example:
```elixir
iex(1)> {:ok, pid} = Hitbtc.Socket.open
{:ok, #PID<0.188.0>}
iex(2)> Hitbtc.Socket.login(pid, "test", "test")
"jghHemLxwTYA"
```
"""
@spec login(pid, binary, binary) :: request_id | {:error, term}
def login(pid, pKey, sKey),
do: Conn.request(pid, "login", %{algo: "BASIC", pKey: pKey, sKey: sKey})
@doc """
Subscribe to ticker
Example:
```elixir
iex(1)> {:ok, pid} = Hitbtc.Socket.open()
{:ok, #PID<0.188.0>}
iex(2)> Hitbtc.Socket.subscribe_ticker(pid, "BCHETH")
"QBYspPaGj2sA"
iex(3)> flush
{:frame, :response, %{id: "QBYspPaGj2sA", jsonrpc: "2.0", result: true}}
{:frame, :response,
%{
jsonrpc: "2.0",
method: "ticker",
params: %{
ask: "2.042597",
bid: "2.034390",
high: "2.220977",
last: "2.040262",
low: "2.019090",
open: "2.136375",
symbol: "BCHETH",
timestamp: "2018-05-03T13:01:37.580Z",
volume: "733.48",
volumeQuote: "1552.40983616"
}
}}
```
"""
@spec subscribe_ticker(pid, binary) :: request_id | {:error, term}
def subscribe_ticker(pid, symbol), do: Conn.request(pid, "subscribeTicker", %{symbol: symbol})
@doc """
Unsubscribe from ticker for symbol
Example:
```elixir
iex(1)> {:ok, pid} = Hitbtc.Socket.open
{:ok, #PID<0.188.0>}
iex(2)> Hitbtc.Socket.subscribe_ticker(pid, "BCHETH")
"2gLFWInOKgSy"
iex(3)> Hitbtc.Socket.unsubscribe_ticker(pid, "BCHETH")
"aCbL4oiPQ0AX"
iex(4)> flush
{:frame, :response, %{id: "2gLFWInOKgSy", jsonrpc: "2.0", result: true}}
{:frame, :response,
%{
jsonrpc: "2.0",
method: "ticker",
params: %{
ask: "2.028765",
bid: "2.021214",
high: "2.220977",
last: "2.027251",
low: "2.019090",
open: "2.138210",
symbol: "BCHETH",
timestamp: "2018-05-03T13:06:04.531Z",
volume: "730.40",
volumeQuote: "1545.68635928"
}
}}
{:frame, :response, %{id: "aCbL4oiPQ0AX", jsonrpc: "2.0", result: true}}
:ok
```
"""
@spec unsubscribe_ticker(pid, binary) :: request_id | {:error, term}
def unsubscribe_ticker(pid, symbol),
do: Conn.request(pid, "unsubscribeTicker", %{symbol: symbol})
@doc """
Subscribe to orderbook for symbol
Example:
```elixir
iex> {:ok, pid} = Hitbtc.Socket.open
{:ok, #PID<0.188.0>}
iex> Hitbtc.Socket.subscribe_orderbook(pid, "BCHETH")
"La3I7JKC3JkJ"
iex> flush
{:frame, :response, %{id: "La3I7JKC3JkJ", jsonrpc: "2.0", result: true}}
{:frame, :response,
%{
jsonrpc: "2.0",
method: "snapshotOrderbook",
params: %{
ask: [
%{price: "2.033071", size: "0.03"},
%{price: "2.033072", size: "0.13"},
%{price: "2.033237", size: "0.21"},
...
],
bid: [
%{price: "2.027210", size: "0.08"},
%{price: "2.027209", size: "1.85"},
%{price: "2.024906", size: "1.85"},
...
],
sequence: 36635757,
symbol: "BCHETH"
}
}}
{:frame, :response,
%{
jsonrpc: "2.0",
method: "updateOrderbook",
params: %{
ask: [
%{price: "2.036203", size: "0.00"},
%{price: "2.037233", size: "0.92"}
],
bid: [],
sequence: 36635758,
symbol: "BCHETH"
}
}}
:ok
```
"""
@spec subscribe_orderbook(pid, binary) :: request_id | {:error, term}
def subscribe_orderbook(pid, symbol),
do: Conn.request(pid, "subscribeOrderbook", %{symbol: symbol})
@doc """
Unsubscribe from orderbook for symbol
Example:
```elixir
iex> {:ok, pid} = Hitbtc.Socket.open
{:ok, #PID<0.188.0>}
iex> Hitbtc.Socket.subscribe_orderbook(pid, "BCHETH")
"La3I7JKC3JkJ"
iex> Hitbtc.Socket.unsubscribe_orderbook(pid, "BCHETH")
"t4xGn6mIDdU1"
iex> flush
{:frame, :response, %{id: "La3I7JKC3JkJ", jsonrpc: "2.0", result: true}}
{:frame, :response,
%{
jsonrpc: "2.0",
method: "snapshotOrderbook",
params: %{
ask: [
%{price: "2.033071", size: "0.03"},
%{price: "2.033072", size: "0.13"},
%{price: "2.033237", size: "0.21"},
...
],
bid: [
%{price: "2.027210", size: "0.08"},
%{price: "2.027209", size: "1.85"},
%{price: "2.024906", size: "1.85"},
...
],
sequence: 36635757,
symbol: "BCHETH"
}
}}
{:frame, :response,
%{
jsonrpc: "2.0",
method: "updateOrderbook",
params: %{
ask: [
%{price: "2.036203", size: "0.00"},
%{price: "2.037233", size: "0.92"}
],
bid: [],
sequence: 36635758,
symbol: "BCHETH"
}
}}
{:frame, :response, %{id: "t4xGn6mIDdU1", jsonrpc: "2.0", result: true}}
:ok
```
"""
@spec unsubscribe_orderbook(pid, binary) :: request_id | {:error, term}
def unsubscribe_orderbook(pid, symbol),
do: Conn.request(pid, "unsubscribeOrderbook", %{symbol: symbol})
@doc """
Subscribe to trades for symbol
Example:
```elixir
iex> Hitbtc.Socket.subscribe_trades(pid, "BCHETH")
"f4vONb_htaii"
iex> flush
{:frame, :response, %{id: "f4vONb_htaii", jsonrpc: "2.0", result: true}}
{:frame, :response,
%{
jsonrpc: "2.0",
method: "snapshotTrades",
params: %{
data: [
%{
id: 282166637,
price: "2.122786",
quantity: "0.05",
side: "buy",
timestamp: "2018-05-02T23:30:09.053Z"
},
%{id: 282176290, price: "2.119886", quantity: "0.01", side: "sell", ...},
%{id: 282176310, price: "2.121108", quantity: "0.10", ...},
%{id: 282176615, price: "2.118504", ...},
%{id: 282176715, ...},
%{...},
...
],
symbol: "BCHETH"
}
}}
:ok
```
"""
@spec subscribe_trades(pid, binary) :: request_id | {:error, term}
def subscribe_trades(pid, symbol), do: Conn.request(pid, "subscribeTrades", %{symbol: symbol})
@doc """
Unsubscribe from trades for symbol
Example:
```elixir
iex> Hitbtc.Socket.subscribe_trades(pid, "BCHETH")
"f4vONb_htaii"
iex> Hitbtc.Socket.unsubscribe_trades(pid, "BCHETH")
"2imaJqNUCbfW"
iex> flush
{:frame, :response, %{id: "f4vONb_htaii", jsonrpc: "2.0", result: true}}
{:frame, :response,
%{
jsonrpc: "2.0",
method: "snapshotTrades",
params: %{
data: [
%{
id: 282166637,
price: "2.122786",
quantity: "0.05",
side: "buy",
timestamp: "2018-05-02T23:30:09.053Z"
},
%{id: 282176290, price: "2.119886", quantity: "0.01", side: "sell", ...},
%{id: 282176310, price: "2.121108", quantity: "0.10", ...},
%{id: 282176615, price: "2.118504", ...},
%{id: 282176715, ...},
%{...},
...
],
symbol: "BCHETH"
}
}}
{:frame, :response, %{id: "2imaJqNUCbfW", jsonrpc: "2.0", result: true}}
:ok
```
"""
@spec unsubscribe_trades(pid, binary) :: request_id | {:error, term}
def unsubscribe_trades(pid, symbol),
do: Conn.request(pid, "unsubscribeTrades", %{symbol: symbol})
@doc """
Subscribe to candles for symbol + period
Example:
```elixir
iex> Hitbtc.Socket.subscribe_candles(pid, "BCHETH", "M30")
"qZJ_-6CqjIvV"
iex> flush
{:frame, :response, %{id: "qZJ_-6CqjIvV", jsonrpc: "2.0", result: true}}
{:frame, :response,
%{
jsonrpc: "2.0",
method: "snapshotCandles",
params: %{
data: [
%{
close: "1.754573",
max: "1.780656",
min: "1.711121",
open: "1.714521",
timestamp: "2018-04-03T13:00:00.000Z",
volume: "125.36",
volumeQuote: "219.62493994"
},
%{close: "1.701630", max: "1.706478", min: "1.699430", ...},
%{close: "1.697734", max: "1.706294", ...},
%{close: "1.708355", ...},
%{...},
...
],
period: "M30",
symbol: "BCHETH"
}
}}
:ok
```
"""
@spec subscribe_candles(pid, binary, binary) :: request_id | {:error, term}
def subscribe_candles(pid, symbol, period \\ "M30"),
do: Conn.request(pid, "subscribeCandles", %{symbol: symbol, period: period})
@doc """
Unsubscribe from candles for symbol + period
Example:
```elixir
iex> Hitbtc.Socket.unsubscribe_candles(pid, "BCHETH", "M30")
"509GAmNnJzwY"
iex(8)> flush
{:frame, :response, %{id: "509GAmNnJzwY", jsonrpc: "2.0", result: true}}
:ok
```
"""
@spec unsubscribe_candles(pid, binary, binary) :: request_id | {:error, term}
def unsubscribe_candles(pid, symbol, period \\ "M30"),
do: Conn.request(pid, "unsubscribeCandles", %{symbol: symbol, period: period})
@doc """
Subscribe to all system reports.
**Requires authorization**
A notification with active orders is sent after subscription and on any service maintenance.
Example:
```elixir
iex(1)> {:ok, pid} = Hitbtc.Socket.open
{:ok, #PID<0.188.0>}
iex(2)> Hitbtc.Socket.subscribe_reports(pid)
"k8CrqluaD6EW"
iex(3)> flush
{:frame, :response,
%{
error: %{code: 1001, description: "", message: "Authorization required"},
id: "k8CrqluaD6EW",
jsonrpc: "2.0"
}}
:ok
```
"""
@spec subscribe_reports(pid) :: request_id | {:error, term}
def subscribe_reports(pid),
do: Conn.request(pid, "subscribeReports", %{})
end
lib/hitbtc/socket.ex
defmodule CoursePlanner.Classes do
@moduledoc """
This module provides custom functionality for controllers over the `Class` model
"""
import Ecto.Changeset
import Ecto.Query
alias CoursePlanner.{Repo, Classes.Class, Notifications.Notifier, Notifications, Settings}
alias CoursePlanner.Terms.Term
alias Ecto.{Changeset, DateTime, Date}
@notifier Application.get_env(:course_planner, :notifier, Notifier)
def all do
query = from t in Term,
join: oc in assoc(t, :offered_courses),
join: co in assoc(oc, :course),
join: c in assoc(oc, :classes),
preload: [offered_courses: {oc, classes: c, course: co}],
order_by: [desc: t.start_date, desc: co.name, desc: c.date,
desc: c.starting_at, desc: c.finishes_at]
Repo.all(query)
end
def new do
Class.changeset(%Class{})
end
def get(id) do
case Repo.get(Class, id) do
nil -> {:error, :not_found}
class -> {:ok, class}
end
end
def edit(id) do
case get(id) do
{:ok, class} -> {:ok, class, Class.changeset(class)}
error -> error
end
end
def create(params) do
%Class{}
|> Class.changeset(params, :create)
|> Repo.insert()
end
def update(id, params) do
case get(id) do
{:ok, class} ->
class
|> Class.changeset(params, :update)
|> Repo.update()
|> format_error(class)
error -> error
end
end
def validate_for_holiday(%{valid?: true} = changeset) do
class_date = changeset |> Changeset.get_field(:date) |> Date.cast!
offered_course_id = changeset |> Changeset.get_field(:offered_course_id)
term = Repo.one(from t in Term,
join: oc in assoc(t, :offered_courses),
where: oc.id == ^offered_course_id)
class_on_holiday? =
Enum.find(term.holidays, fn(holiday) ->
holiday.date
|> Date.cast!
|> Date.compare(class_date)
|> Kernel.==(:eq)
end)
if class_on_holiday? do
add_error(changeset, :date, "Cannot create a class on holiday")
else
changeset
end
end
def validate_for_holiday(changeset), do: changeset
def delete(id) do
case get(id) do
{:ok, class} -> Repo.delete(class)
error -> error
end
end
def notify_class_students(class, current_user, notification_type, path \\ "/") do
class
|> get_subscribed_students()
|> Enum.reject(fn %{id: id} -> id == current_user.id end)
|> Enum.each(&(notify_user(&1, notification_type, path)))
end
def notify_user(user, type, path) do
Notifications.new()
|> Notifications.type(type)
|> Notifications.resource_path(path)
|> Notifications.to(user)
|> @notifier.notify_later()
end
defp get_subscribed_students(class) do
class = class
|> Repo.preload([:offered_course, offered_course: :students])
class.offered_course.students
end
def get_offered_course_classes(offered_course_id) do
Repo.all(from c in Class, where: c.offered_course_id == ^offered_course_id)
end
def classes_with_attendances(offered_course_id, user_id) do
query = from c in Class,
left_join: a in assoc(c, :attendances), on: a.student_id == ^user_id,
where: c.offered_course_id == ^offered_course_id,
order_by: [c.date, c.starting_at],
select: %{
classroom: c.classroom,
date: c.date,
starting_at: c.starting_at,
attendance_type: a.attendance_type
}
Repo.all(query)
end
def sort_by_starting_time(classes) do
Enum.sort(classes, fn (class_a, class_b) ->
class_a_datetime = DateTime.from_date_and_time(class_a.date, class_a.starting_at)
class_b_datetime = DateTime.from_date_and_time(class_b.date, class_b.starting_at)
DateTime.compare(class_a_datetime, class_b_datetime) == :lt
end)
end
def split_past_and_next(classes) do
now = Settings.utc_to_system_timezone(Timex.now())
{reversed_past_classes, next_classes} =
Enum.split_with(classes, &(compare_class_date_time(&1, now)))
{Enum.reverse(reversed_past_classes), next_classes}
end
defp format_error({:ok, class}, _), do: {:ok, class}
defp format_error({:error, changeset}, class), do: {:error, class, changeset}
defp compare_class_date_time(class, now) do
class_datetime =
class.date
|> DateTime.from_date_and_time(class.starting_at)
|> Settings.utc_to_system_timezone()
Timex.compare(class_datetime, now) == -1
end
end
lib/course_planner/classes/classes.ex
defmodule RigOutboundGateway.Kafka.GroupSubscriber do
@moduledoc """
A group subscriber that handles all assignments (i.e., all topic-partitions
this group subscriber is assigned to by the broker). Incoming messages are
handled in partition handlers that run in subprocesses (they're spawned in
`init`).
Scalability and failover are achieved by:
- having an even distribution of topic-partitions among the nodes in the
cluster by means of the Kafka consumer group protocol;
- having topic-partitions re-assigned to other group subscribers in the
cluster automatically whenever this node goes down;
- periodically committing the offsets to Kafka, so another node will continue
consuming (almost) at the right offset.
From brod_group_subscriber:
A group subscriber is a gen_server which subscribes to partition consumers
(poller) and calls the user-defined callback functions for message
processing.
An overview of what it does behind the scene:
1. Start a consumer group coordinator to manage the consumer group states,
see `:brod_group_coordinator.start_link/4`.
2. Start (if not already started) topic-consumers (pollers) and subscribe
to the partition workers when group assignment is received from the
group leader, see `:brod.start_consumer/3`.
3. Call `CallbackModule:handle_message/4` when messages are received from
the partition consumers.
4. Send acknowledged offsets to group coordinator which will be committed
to kafka periodically.
"""
use Rig.Config, [:brod_client_id, :consumer_group, :source_topics]
require Logger
alias RigOutboundGateway.Kafka.MessageHandler
@behaviour :brod_group_subscriber
@type handlers_t :: %{required(String.t()) => nonempty_list(pid)}
@type state_t :: %{handlers: handlers_t}
@doc """
Makes sure the Brod client is running and starts the group subscriber.
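
A minimal sketch of running the subscriber under a supervisor (assumes the
Brod client referenced by `:brod_client_id` is started elsewhere in the
application):

```elixir
# Child spec delegating to this module's start_link/0.
children = [
  %{
    id: RigOutboundGateway.Kafka.GroupSubscriber,
    start: {RigOutboundGateway.Kafka.GroupSubscriber, :start_link, []}
  }
]
Supervisor.start_link(children, strategy: :one_for_one)
```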
"""
def start_link do
conf = config()
:brod.start_link_group_subscriber(
conf.brod_client_id,
conf.consumer_group,
conf.source_topics,
_group_config = [rejoin_delay_seconds: 5],
_consumer_config = [begin_offset: :latest],
_callback_module = __MODULE__,
_callback_init_args = :no_args
)
end
## Server callbacks
@doc """
brod_group_subscriber callback
We spawn one message handler for each topic-partition.
"""
@impl :brod_group_subscriber
@spec init(String.t(), :no_args) :: {:ok, state_t}
def init(_consumer_group_id, :no_args) do
conf = config()
Logger.info("Starting Kafka group subscriber (config=#{inspect(conf)})")
handlers = spawn_message_handlers(conf.brod_client_id, conf.source_topics)
{:ok, %{handlers: handlers}}
end
@doc """
brod_group_subscriber callback
We receive the message here and hand it off to the respective message handler.
"""
@impl :brod_group_subscriber
@spec handle_message(String.t(), String.t() | non_neg_integer, String.t(), state_t) ::
{:ok, state_t}
def handle_message(topic, partition, message, %{handlers: handlers} = state) do
handler_pid = handlers["#{topic}-#{partition}"]
send(handler_pid, message)
{:ok, state}
end
@spec spawn_message_handlers(String.t(), nonempty_list(String.t())) :: handlers_t
defp spawn_message_handlers(brod_client_id, [topic | remaining_topics]) do
{:ok, n_partitions} = :brod.get_partitions_count(brod_client_id, topic)
spawn_for_partition =
&spawn_link(MessageHandler, :message_handler_loop, [
topic,
_partition = &1,
_group_subscriber_pid = self()
])
0..(n_partitions - 1)
|> Enum.reduce(%{}, fn partition, acc ->
handler_pid = spawn_for_partition.(partition)
Map.put(acc, "#{topic}-#{partition}", handler_pid)
end)
|> Map.merge(spawn_message_handlers(brod_client_id, remaining_topics))
end
defp spawn_message_handlers(_brod_client_id, []) do
%{}
end
end
apps/rig_outbound_gateway/lib/rig_outbound_gateway/kafka/group_subscriber.ex
defmodule Exnoops.Mazebot do
@moduledoc """
Module to interact with Github's Noop: Mazebot
See the [official `noop` documentation](https://noopschallenge.com/challenges/mazebot) for API information including the accepted parameters.
"""
require Logger
import Exnoops.API
@noop "mazebot"
@doc ~S"""
Query Mazebot for a random maze
+ Parameters are sent to the function as a keyword list.
## Examples
iex> Exnoops.Mazebot.get_maze()
{:ok, %{
"name" => "Maze #236 (10x10)",
"mazePath" => "/mazebot/mazes/ikTcNQMwKhux3bWjV3SSYKfyaVHcL0FXsvbwVGk5ns8",
"startingPosition" => [ 4, 3 ],
"endingPosition" => [ 3, 6 ],
"message" => "When you have figured out the solution, post it back to this url. See the exampleSolution for more information.",
"exampleSolution" => %{ "directions" => "ENWNNENWNNS" },
"map" => [
[ " ", " ", "X", " ", " ", " ", "X", " ", "X", "X" ],
[ " ", "X", " ", " ", " ", " ", " ", " ", " ", " " ],
[ " ", "X", " ", "X", "X", "X", "X", "X", "X", " " ],
[ " ", "X", " ", " ", "A", " ", " ", " ", "X", " " ],
[ " ", "X", "X", "X", "X", "X", "X", "X", " ", " " ],
[ "X", " ", " ", " ", "X", " ", " ", " ", "X", " " ],
[ " ", " ", "X", "B", "X", " ", "X", " ", "X", " " ],
[ " ", " ", "X", " ", "X", " ", "X", " ", " ", " " ],
[ "X", " ", "X", "X", "X", "X", "X", " ", "X", "X" ],
[ "X", " ", " ", " ", " ", " ", " ", " ", "X", "X" ]
]
}}
iex> Exnoops.Mazebot.get_maze([minSize: 10, maxSize: 20])
{:ok, %{
"name" => "Maze #142 (10x10)",
"mazePath" => "/mazebot/mazes/dTXurZOonsCbWC9_PDBWpiRAvBME3VBDIf9hcwwCdNc",
"startingPosition" => [9, 3],
"endingPosition" => [7, 0],
"message" => "When you have figured out the solution, post it back to this url. See the exampleSolution for more information.",
"exampleSolution" => %{ "directions" => "ENWNNENWNNS" },
"map" => [
[ "X", " ", " ", " ", " ", " ", "X", "B", " ", " " ],
[ " ", " ", " ", " ", "X", " ", " ", " ", "X", " " ],
[ " ", "X", "X", "X", " ", "X", "X", "X", " ", "X" ],
[ " ", " ", " ", " ", "X", " ", " ", "X", " ", "A" ],
[ " ", "X", "X", "X", " ", "X", " ", "X", " ", " " ],
[ " ", " ", " ", "X", " ", "X", " ", "X", " ", " " ],
[ " ", "X", " ", "X", " ", "X", " ", "X", " ", "X" ],
[ " ", " ", " ", "X", " ", "X", " ", "X", " ", " " ],
[ "X", " ", "X", "X", " ", " ", " ", " ", " ", " " ],
[ "X", " ", " ", " ", " ", "X", " ", " ", " ", "X" ]
]
}}
"""
@spec get_maze(keyword()) :: {atom(), map()}
def get_maze(opts \\ []) when is_list(opts) do
Logger.debug("Calling Mazebot.get_maze()")
case get("/" <> @noop <> "/random", opts) do
{:ok, _} = res -> res
error -> error
end
end
@doc ~S"""
Query Mazebot for race mazes
## Examples
iex> Exnoops.Mazebot.get_race("/mazebot/race/Fh5Kt7l9gMQr41GvWkmoCg")
{:ok, %{
"name" => "Mazebot 500 Stage#1 (5x5)",
"mazePath" => "/mazebot/race/Fh5Kt7l9gMQr41GvWkmoCg",
"map" => [
[ "A", " ", " ", " ", " " ],
[ " ", "X", "X", "X", " " ],
[ " ", " ", "X", " ", " " ],
[ "X", " ", "X", "B", "X" ],
[ "X", " ", " ", " ", "X" ]
],
"message" => "When you have figured out the solution, post it back to this url in JSON format. See the exampleSolution for more information.",
"startingPosition" => [0, 0],
"endingPosition" => [3, 3],
"exampleSolution" => %{ "directions" => "ENWNNENWNNS" }
}}
"""
@spec get_race(String.t()) :: {atom(), map()}
def get_race(path) when is_binary(path) do
Logger.debug("Calling Mazebot.get_race()")
case get(path, []) do
{:ok, _} = res -> res
error -> error
end
end
@doc """
Starts a maze race with Mazebot
## Examples
iex> Exnoops.Mazebot.start_race("yourgithubloginhere")
{:ok, %{
"message" => "Start your engines!",
"nextMaze" => "/mazebot/race/iEGpDT1I0qFzGU81yb49JY3Srj1daT70P6e-Zr6bpR0"
}}
"""
@spec start_race(binary()) :: {atom(), map()}
def start_race(username) when is_binary(username) do
Logger.debug("Calling Mazebot.start_race(#{username})")
case post("/mazebot/race/start", %{"login" => username}) do
{:ok, _} = res -> res
error -> error
end
end
@doc ~S"""
Submits an answer to a race
Takes in the race URL and a string of solution directions.
Returns an outer tuple denoting the status of the HTTP response and an inner tuple for the status of the maze solution.
## Examples
iex> Exnoops.Mazebot.submit_maze("/mazebot/mazes/dTXurZOonsCbWC9_PDBWpiRAvBME3VBDIf9hcwwCdNc", "ENNNN....")
{:ok, {:ok, %{
"result" => "success",
"message" => "You solved it in 0.029 seconds with 56 steps, the shortest possible solution.",
"shortestSolutionLength" => 56,
"yourSolutionLength" => 56,
"elapsed" => 29
}}}
iex> Exnoops.Mazebot.submit_maze("/mazebot/mazes/17pSAsql1EEaCvEe28UnAQ", "ESS")
{:ok, {:error, %{"message" => "Hit a wall at directions[1]", "result" => "failed"}}}
"""
@spec submit_maze(String.t(), String.t()) :: {atom(), {atom(), map()}}
def submit_maze(path, directions) when is_binary(path) and is_binary(directions) do
Logger.debug("Calling Mazebot.submit_maze()")
case post(path, %{"directions" => directions}) do
{:ok, %{"result" => "success"} = res} -> {:ok, {:ok, res}}
{:ok, %{"result" => "failed"} = res} -> {:ok, {:error, res}}
error -> error
end
end
end
lib/exnoops/mazebot.ex
defmodule XlsxReader do
@moduledoc """
Opens XLSX workbook and reads its worksheets.
## Example
```elixir
{:ok, package} = XlsxReader.open("test.xlsx")
XlsxReader.sheet_names(package)
# ["Sheet 1", "Sheet 2", "Sheet 3"]
{:ok, rows} = XlsxReader.sheet(package, "Sheet 1")
# [
# ["Date", "Temperature"],
# [~D[2019-11-01], 8.4],
# [~D[2019-11-02], 7.5],
# ...
# ]
```
## Sheet contents
Sheets are loaded on-demand by `sheet/3` and `sheets/2`.
The sheet contents is returned as a list of lists:
```elixir
[
["A1", "B1", "C1" | _],
["A2", "B2", "C2" | _],
["A3", "B3", "C3" | _],
| _
]
```
The behavior of the sheet parser can be customized for each
individual sheet, see `sheet/3`.
## Cell types
This library takes a best effort approach for determining cell types.
In order of priority, the actual type of an XLSX cell value is determined using:
1. basic cell properties (e.g. boolean)
2. predefined known styles (e.g. default money/date formats)
3. introspection of the [custom format string](https://support.microsoft.com/en-us/office/number-format-codes-5026bbd6-04bc-48cd-bf33-80f18b4eae68) associated with the cell
### Custom formats supported by default
* percentages
* ISO 8601 date/time (y-m-d)
* US date/time (m/d/y)
* European date/time (d/m/y)
### Additional custom formats support
If the spreadsheet you need to process contains some unusual cell formatting, you
may provide hints to map format strings to a known cell type.
The hints are given as a list of `{matcher, type}` tuples. The matcher is either a
string or regex to match against the custom format string. The supported types are:
* `:string`
* `:number`
* `:percentage`
* `:date`
* `:time`
* `:date_time`
* `:unsupported` (used for explicitly unsupported styles and formats)
#### Example
```elixir
[
{"mmm yy", :date},
{~r/mmm? yy hh:mm/, :date_time},
{"[$CHF]0.00", :number}
]
```
To find out what custom formats are in use in the workbook, you can inspect `package.workbook.custom_formats`:
```elixir
# num_fmt_id => format string
%{
"0" => "General",
"59" => "dd/mm/yyyy",
"60" => "dd/mm/yyyy hh:mm",
"61" => "hh:mm",
"62" => "0.0%",
"63" => "[$CHF]0.00"
}
```
"""
alias XlsxReader.{PackageLoader, ZipArchive}
@typedoc """
Source for the XLSX file: file system (`:path`) or in-memory (`:binary`)
"""
@type source :: :path | :binary
@typedoc """
Option to specify the XLSX file source
"""
@type source_option :: {:source, source()}
@typedoc """
List of cell values
"""
@type row :: list(any())
@typedoc """
List of rows
"""
@type rows :: list(row())
@typedoc """
Sheet name
"""
@type sheet_name :: String.t()
@typedoc """
Error tuple with message describing the cause of the error
"""
@type error :: {:error, String.t()}
@doc """
Opens an XLSX file located on the file system (default) or from memory.
## Examples
### Opening XLSX file on the file system
```elixir
{:ok, package} = XlsxReader.open("test.xlsx")
```
### Opening XLSX file from memory
```elixir
blob = File.read!("test.xlsx")
{:ok, package} = XlsxReader.open(blob, source: :binary)
```
## Options
* `source`: `:path` (on the file system, default) or `:binary` (in memory)
* `supported_custom_formats`: a list of `{regex | string, type}` tuples (see "Additional custom formats support")
"""
@spec open(String.t() | binary(), [source_option]) ::
{:ok, XlsxReader.Package.t()} | error()
def open(file, options \\ []) do
file
|> ZipArchive.handle(Keyword.get(options, :source, :path))
|> PackageLoader.open(Keyword.take(options, [:supported_custom_formats]))
end
@doc """
Lists the names of the sheets in the package's workbook
"""
@spec sheet_names(XlsxReader.Package.t()) :: [sheet_name()]
def sheet_names(package) do
for %{name: name} <- package.workbook.sheets, do: name
end
@doc """
Loads the sheet with the given name (see `sheet_names/1`)
## Options
* `type_conversion` - boolean (default: `true`)
* `blank_value` - placeholder value for empty cells (default: `""`)
* `empty_rows` - include empty rows (default: `true`)
* `number_type` - type used for numeric conversion: `Integer`, `Decimal` or `Float` (default: `Float`)
* `skip_row?`: function callback that determines if a row should be skipped.
Takes precedence over `blank_value` and `empty_rows`.
Defaults to `nil` (keeping the behaviour of `blank_value` and `empty_rows`).
The `Decimal` type requires the [decimal](https://github.com/ericmj/decimal) library.
## Examples
### Skipping rows
When using the `skip_row?` callback, rows are ignored in the parser which is more memory efficient.
```elixir
# Skip all rows for which all the values are either blank or "-"
XlsxReader.sheet(package, "Sheet1", skip_row?: fn row ->
Enum.all?(row, & String.trim(&1) in ["", "-"])
end)
# Skip all rows for which the first column contains the text "disabled"
XlsxReader.sheet(package, "Sheet1", skip_row?: fn [column | _] ->
column == "disabled"
end)
```
"""
@spec sheet(XlsxReader.Package.t(), sheet_name(), Keyword.t()) :: {:ok, rows()}
def sheet(package, sheet_name, options \\ []) do
PackageLoader.load_sheet_by_name(package, sheet_name, options)
end
@doc """
Loads all the sheets in the workbook.
On success, returns `{:ok, [{sheet_name, rows}, ...]}`.
## Filtering options
* `only` - include the sheets whose name matches the filter
* `except` - exclude the sheets whose name matches the filter
Sheets can filtered by name using:
* a string (e.g. `"Exact Match"`)
* a regex (e.g. `~r/Sheet \d+/`)
* a list of string and/or regexes (e.g. `["Parameters", ~r/Sheet [12]/]`)
## Sheet options
See `sheet/3`.
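## Examples
```elixir
# Load only sheets whose names match a pattern, excluding one by exact name
# (a minimal sketch; `package` comes from `XlsxReader.open/2`):
{:ok, sheets} = XlsxReader.sheets(package, only: [~r/Sheet \d+/], except: ["Sheet 3"])
```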
"""
@spec sheets(XlsxReader.Package.t(), Keyword.t()) ::
{:ok, list({sheet_name(), rows()})} | error()
def sheets(package, options \\ []) do
package.workbook.sheets
|> filter_sheets_by_name(
sheet_filter_option(options, :only),
sheet_filter_option(options, :except)
)
|> Enum.reduce_while([], fn sheet, acc ->
case PackageLoader.load_sheet_by_rid(package, sheet.rid, options) do
{:ok, rows} ->
{:cont, [{sheet.name, rows} | acc]}
error ->
{:halt, error}
end
end)
|> case do
sheets when is_list(sheets) ->
{:ok, Enum.reverse(sheets)}
error ->
error
end
end
@doc """
Loads all the sheets in the workbook concurrently.
On success, returns `{:ok, [{sheet_name, rows}, ...]}`.
When processing files with multiple sheets, `async_sheets/3` is ~3x faster than `sheets/2`
but it comes with a caveat. `async_sheets/3` uses `Task.async_stream/3` under the hood and thus
runs each concurrent task with a timeout. If you expect your dataset to be of a significant size,
you may want to increase it from the default 10000ms (see "Concurrency options" below).
If the order in which the sheets are returned is not relevant for your application, you can
pass `ordered: false` (see "Concurrency options" below) for a modest speed gain.
## Filtering options
See `sheets/2`.
## Sheet options
See `sheet/3`.
## Concurrency options
* `max_concurrency` - maximum number of tasks to run at the same time (default: `System.schedulers_online/0`)
* `ordered` - maintain order consistent with `sheet_names/1` (default: `true`)
* `timeout` - maximum duration in milliseconds to process a sheet (default: `10_000`)
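## Examples
```elixir
# A minimal sketch: load all sheets concurrently with a raised timeout,
# accepting arbitrary ordering for a modest speed gain.
{:ok, sheets} = XlsxReader.async_sheets(package, [], timeout: 60_000, ordered: false)
```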
"""
def async_sheets(package, sheet_options \\ [], task_options \\ []) do
max_concurrency = Keyword.get(task_options, :max_concurrency, System.schedulers_online())
ordered = Keyword.get(task_options, :ordered, true)
timeout = Keyword.get(task_options, :timeout, 10_000)
package.workbook.sheets
|> filter_sheets_by_name(
sheet_filter_option(sheet_options, :only),
sheet_filter_option(sheet_options, :except)
)
|> Task.async_stream(
fn sheet ->
case PackageLoader.load_sheet_by_rid(package, sheet.rid, sheet_options) do
{:ok, rows} ->
{:ok, {sheet.name, rows}}
error ->
error
end
end,
max_concurrency: max_concurrency,
ordered: ordered,
timeout: timeout,
on_timeout: :kill_task
)
|> Enum.reduce_while({:ok, []}, fn
{:ok, {:ok, entry}}, {:ok, acc} ->
{:cont, {:ok, [entry | acc]}}
{:ok, error}, _acc ->
{:halt, {:error, error}}
{:exit, :timeout}, _acc ->
{:halt, {:error, "timeout exceeded"}}
{:exit, reason}, _acc ->
{:halt, {:error, reason}}
end)
|> case do
{:ok, list} ->
if ordered,
do: {:ok, Enum.reverse(list)},
else: {:ok, list}
error ->
error
end
end
## Sheet filter
def sheet_filter_option(options, key),
do: options |> Keyword.get(key, []) |> List.wrap()
defp filter_sheets_by_name(sheets, [], []), do: sheets
defp filter_sheets_by_name(sheets, only, except) do
Enum.filter(sheets, fn %{name: name} ->
filter_only?(name, only) && !filter_except?(name, except)
end)
end
defp filter_only?(_name, []), do: true
defp filter_only?(name, filters), do: Enum.any?(filters, &filter_match?(name, &1))
defp filter_except?(_name, []), do: false
defp filter_except?(name, filters), do: Enum.any?(filters, &filter_match?(name, &1))
defp filter_match?(name, %Regex{} = regex), do: String.match?(name, regex)
defp filter_match?(exact_match, exact_match) when is_binary(exact_match), do: true
defp filter_match?(_, _), do: false
end
lib/xlsx_reader.ex
defmodule Oban.Worker do
@moduledoc """
Defines a behavior and macro to guide the creation of worker modules.
Worker modules do the work of processing a job. At a minimum they must define a `perform/1`
function, which will be called with an `args` map.
## Defining Workers
Define a worker to process jobs in the `events` queue:
defmodule MyApp.Workers.Business do
use Oban.Worker, queue: "events", max_attempts: 10
@impl Oban.Worker
def perform(args) do
IO.inspect(args)
end
end
The `perform/1` function will always receive the job's `args` map. In this example the worker
will simply inspect any arguments that are provided. Note that the return value isn't important.
If `perform/1` returns without raising an exception the job is considered complete.
## Enqueuing Jobs
All workers implement a `new/2` function that converts an args map into a job changeset
suitable for inserting into the database for later execution:
%{in_the: "business", of_doing: "business"}
|> MyApp.Workers.Business.new()
|> MyApp.Repo.insert()
The worker's defaults may be overridden by passing options:
%{vote_for: "none of the above"}
|> MyApp.Workers.Business.new(queue: "special", max_attempts: 5)
|> MyApp.Repo.insert()
See `Oban.Job` for all available options.
"""
alias Oban.Job
@doc """
Build a job changeset for this worker with optional overrides.
See `Oban.Job.new/2` for the available options.
"""
@callback new(args :: Job.args(), opts :: [Job.option()]) :: Ecto.Changeset.t()
@doc """
The `perform/1` function is called when the job is executed.
The function is passed a job's args, which is always a map with string keys.
The return value is not important. If the function executes without raising an exception it is
considered a success. If the job raises an exception it is a failure and the job may be
scheduled for a retry.
"""
@callback perform(args :: map()) :: term()
@doc false
defmacro __using__(opts) do
quote location: :keep do
alias Oban.{Job, Worker}
@behaviour Worker
@opts unquote(opts)
|> Keyword.take([:queue, :max_attempts])
|> Keyword.put(:worker, to_string(__MODULE__))
@impl Worker
def new(args, opts \\ []) when is_map(args) do
Job.new(args, Keyword.merge(@opts, opts))
end
@impl Worker
def perform(args) when is_map(args) do
:ok
end
defoverridable Worker
end
end
end
| lib/oban/worker.ex | 0.82573 | 0.561425 | worker.ex | starcoder |
defmodule Ecto.Query.JoinBuilder do
@moduledoc false
alias Ecto.Query.BuilderUtil
alias Ecto.Query.Query
alias Ecto.Query.QueryExpr
alias Ecto.Query.JoinExpr
@doc """
Escapes a join expression (not including the `on` expression).
It returns a tuple containing the binds, the on expression (if available)
and the association expression.
## Examples
iex> escape(quote(do: x in "foo"), [])
{ :x, "foo", nil }
iex> escape(quote(do: "foo"), [])
{ nil, "foo", nil }
iex> escape(quote(do: x in Sample), [])
{ :x, { :__aliases__, [alias: false], [:Sample] }, nil }
iex> escape(quote(do: c in p.comments), [:p])
{ :c, nil, {{:{}, [], [:&, [], [0]]}, :comments} }
"""
@spec escape(Macro.t, [atom]) :: { [atom], Macro.t | nil, Macro.t | nil }
def escape({ :in, _, [{ var, _, context }, expr] }, vars)
when is_atom(var) and is_atom(context) do
escape(expr, vars) |> set_elem(0, var)
end
def escape({ :__aliases__, _, _ } = module, _vars) do
{ nil, module, nil }
end
def escape(string, _vars) when is_binary(string) do
{ nil, string, nil }
end
def escape(dot, vars) do
case BuilderUtil.escape_dot(dot, vars) do
{ _, _ } = var_field ->
{ [], nil, var_field }
:error ->
raise Ecto.QueryError, reason: "malformed `join` query expression"
end
end
@doc """
Builds a quoted expression.
The quoted expression should evaluate to a query at runtime.
If possible, it does all calculations at compile time to avoid
runtime work.
"""
@spec build(Macro.t, atom, [Macro.t], Macro.t, Macro.t, Macro.Env.t) :: Macro.t
def build(query, qual, binding, expr, on, env) do
binding = BuilderUtil.escape_binding(binding)
{ join_bind, join_expr, join_assoc } = escape(expr, binding)
is_assoc? = not nil?(join_assoc)
validate_qual(qual)
validate_on(on, is_assoc?)
validate_bind(join_bind, binding)
# Define the variable that will be used to calculate the number of binds.
# If the variable is known at compile time, calculate it now.
query = Macro.expand(query, env)
{ query, getter, setter } = count_binds(query)
join_on = escape_on(on, binding ++ List.wrap(join_bind), { join_bind, getter }, env)
join =
quote do
JoinExpr[qual: unquote(qual), source: unquote(join_expr), on: unquote(join_on),
file: unquote(env.file), line: unquote(env.line), assoc: unquote(join_assoc)]
end
case query do
Query[joins: joins] ->
query.joins(joins ++ [join]) |> BuilderUtil.escape_query
_ ->
quote do
Query[joins: joins] = query = Ecto.Queryable.to_query(unquote(query))
unquote(setter)
query.joins(joins ++ [unquote(join)])
end
end
end
defp escape_on(nil, _binding, _join_var, _env), do: nil
defp escape_on(on, binding, join_var, env) do
on = BuilderUtil.escape(on, binding, join_var)
quote do: QueryExpr[expr: unquote(on), line: unquote(env.line), file: unquote(env.file)]
end
defp count_binds(query) do
case BuilderUtil.unescape_query(query) do
# We have the query, calculate the count binds.
Query[] = unescaped ->
{ unescaped, BuilderUtil.count_binds(unescaped), nil }
# We don't have the query, handle it at runtime.
_ ->
{ query,
quote(do: var!(count_binds, Ecto.Query)),
quote(do: var!(count_binds, Ecto.Query) = BuilderUtil.count_binds(query)) }
end
end
@qualifiers [:inner, :left, :right, :full]
defp validate_qual(qual) when qual in @qualifiers, do: :ok
defp validate_qual(_qual) do
raise Ecto.QueryError,
reason: "invalid join qualifier, accepted qualifiers are: " <>
Enum.map_join(@qualifiers, ", ", &"`#{inspect &1}`")
end
defp validate_on(nil, false) do
raise Ecto.QueryError,
reason: "`join` expression requires explicit `on` " <>
"expression unless it's an association join expression"
end
defp validate_on(_on, _is_assoc?), do: :ok
defp validate_bind(bind, all) do
if bind && bind in all do
raise Ecto.QueryError, reason: "variable `#{bind}` is already defined in query"
end
end
end
| lib/ecto/query/join_builder.ex | 0.855776 | 0.457258 | join_builder.ex | starcoder |
defmodule Elixoids.Collision.Server do
@moduledoc """
Simplistic collision detection.
Runs as a separate process to avoid slowing the game loop on busy screens.
Tests everything against everything else - no bounding boxes or culling.
A bullet may take out multiple ships, or multiple asteroids, but not both a ship and an asteroid.
"""
use GenServer
alias Elixoids.Game.Snapshot
import Elixoids.Const, only: [saucer_tag: 0]
import Elixoids.Event
@saucer_tag saucer_tag()
def start_link(game_id) when is_integer(game_id) do
GenServer.start_link(__MODULE__, game_id, name: via(game_id))
end
def collision_tests(game_id, game) do
GenServer.cast(via(game_id), {:collision_tests, game})
end
defp via(game_id), do: {:via, Registry, {Registry.Elixoids.Collisions, {game_id}}}
# GenServer
def init(args), do: {:ok, args}
# Collisions
def handle_cast({:collision_tests, game}, game_id) do
%Snapshot{asteroids: asteroids, bullets: bullets, ships: ships} = game
_events = collision_check(asteroids, bullets, ships, game_id)
{:noreply, game_id}
end
def collision_check(asteroids, bullets, ships, game_id \\ -1) do
tally = [a: MapSet.new(asteroids), b: MapSet.new(bullets), s: MapSet.new(ships), hits: []]
[_, _, _, hits: events] =
tally
|> check_bullets_hit_asteroids(game_id)
|> check_bullets_hit_ships(game_id)
|> check_asteroids_hit_ships(game_id)
|> check_saucer_hit_ships(game_id)
events
end
defp check_bullets_hit_asteroids(tally = [a: as, b: bs, s: _, hits: _], game_id) do
hits =
for b <- bs,
a <- as,
bullet_hits_asteroid?(b, a),
do: dispatch({:bullet_hit_asteroid, b, a, game_id})
Enum.reduce(hits, tally, fn hit = {_, b, a, _}, [a: as, b: bs, s: ss, hits: hits] ->
[a: MapSet.delete(as, a), b: MapSet.delete(bs, b), s: ss, hits: [hit | hits]]
end)
end
defp check_bullets_hit_ships(tally = [a: _, b: bs, s: ss, hits: _], game_id) do
hits =
for b <- bs,
s <- ss,
bullet_hits_ship?(b, s),
do: dispatch({:bullet_hit_ship, b, s, game_id})
Enum.reduce(hits, tally, fn hit = {_, b, s, _}, [a: as, b: bs, s: ss, hits: hits] ->
[a: as, b: MapSet.delete(bs, b), s: MapSet.delete(ss, s), hits: [hit | hits]]
end)
end
defp check_asteroids_hit_ships(tally = [a: as, b: _, s: ss, hits: _], game_id) do
hits =
for a <- as,
s <- ss,
asteroid_hits_ship?(a, s),
do: dispatch({:asteroid_hit_ship, a, s, game_id})
Enum.reduce(hits, tally, fn hit = {_, a, s, _}, [a: as, b: bs, s: ss, hits: hits] ->
[a: MapSet.delete(as, a), b: bs, s: MapSet.delete(ss, s), hits: [hit | hits]]
end)
end
defp check_saucer_hit_ships(tally = [a: _, b: _, s: ss, hits: _], game_id) do
case ships_tagged_saucer(ss) do
{[], _} ->
tally
{[saucer], ships} ->
hits =
for s <- ships,
ship_hits_ship?(s, saucer),
do: dispatch({:ship_hit_ship, saucer, s, game_id})
Keyword.update(tally, :hits, [], &(&1 ++ hits))
end
end
defp ships_tagged_saucer(ss), do: Enum.split_with(ss, fn %{tag: tag} -> tag == @saucer_tag end)
defp dispatch({_, %{pid: nil}, _, _} = event), do: event
defp dispatch({:bullet_hit_ship, _b, _s, _game_id} = event) do
spawn(fn -> bullet_hit_ship(event) end)
event
end
defp dispatch({:bullet_hit_asteroid, _b, _a, _game_id} = event) do
spawn(fn -> bullet_hit_asteroid(event) end)
event
end
defp dispatch({:asteroid_hit_ship, _a, _s, _game_id} = event) do
spawn(fn -> asteroid_hit_ship(event) end)
event
end
defp dispatch({:ship_hit_ship, _s1, _s2, _game_id} = event) do
spawn(fn -> ship_hit_ship(event) end)
event
end
@doc """
Square a number.
"""
defmacro sq(n) do
quote do
unquote(n) * unquote(n)
end
end
@doc """
Bullet can either be newly spawned (a point), or moving (a line segment).
"""
def bullet_hits_ship?(%{pos: [_, _] = line}, ship),
do: line_segment_intersects_circle?(line, ship)
def bullet_hits_ship?(%{pos: [bullet]}, ship), do: point_inside_circle?(bullet, ship)
def bullet_hits_asteroid?(%{pos: [_, _] = line}, asteroid),
do: line_segment_intersects_circle?(line, asteroid)
def bullet_hits_asteroid?(%{pos: [bullet]}, asteroid),
do: point_inside_circle?(bullet, asteroid)
@doc """
Test if two circles touch or overlap by comparing
distances between their centres
"""
def asteroid_hits_ship?(asteroid, ship) do
%{pos: %{x: ax, y: ay}, radius: ar} = asteroid
%{pos: %{x: sx, y: sy}, radius: sr} = ship
sq(ax - sx) + sq(ay - sy) <= sq(sr + ar)
end
def ship_hits_ship?(s1, s2) do
%{pos: %{x: s1x, y: s1y}, radius: s1r} = s1
%{pos: %{x: s2x, y: s2y}, radius: s2r} = s2
sq(s2x - s1x) + sq(s2y - s1y) < sq(s1r + s2r)
end
def point_inside_circle?(%{x: px, y: py}, %{pos: %{x: cx, y: cy}, radius: r}) do
sq(px - cx) + sq(py - cy) < sq(r)
end
# https://stackoverflow.com/a/1084899/3366
def line_segment_intersects_circle?(
[%{x: ex, y: ey} = p1, %{x: lx, y: ly} = p2],
%{
pos: %{x: cx, y: cy},
radius: r
} = o
) do
d = {lx - ex, ly - ey}
f = {ex - cx, ey - cy}
a = dot(d, d)
b = 2 * dot(f, d)
c = dot(f, f) - r * r
discriminant = b * b - 4 * a * c
if discriminant < 0 do
false
else
discriminant = :math.sqrt(discriminant)
t1 = (-b - discriminant) / (2 * a)
t2 = (-b + discriminant) / (2 * a)
cond do
t1 >= 0 && t1 <= 1 -> true
t2 >= 0 && t2 <= 1 -> true
true -> point_inside_circle?(p1, o) || point_inside_circle?(p2, o)
end
end
end
defp dot({a1, a2}, {b1, b2}), do: a1 * b1 + a2 * b2
end
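A quick sanity check of the public geometry helper, using hypothetical maps that match the expected shapes:

```elixir
# A bullet travelling along y = 0, and a ship centred one unit off the path.
bullet_path = [%{x: 0.0, y: 0.0}, %{x: 10.0, y: 0.0}]
ship = %{pos: %{x: 5.0, y: 1.0}, radius: 2.0}

Elixoids.Collision.Server.line_segment_intersects_circle?(bullet_path, ship)
# => true (the segment passes well within the ship's radius)
```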
| lib/elixoids/collision/server.ex | 0.8339 | 0.537466 | server.ex | starcoder |
defmodule RemoteIp.Headers do
@moduledoc """
Functions for parsing IPs from multiple types of forwarding headers.
"""
@doc """
Extracts all headers with the given names.
Note that `Plug.Conn` headers are assumed to have been normalized to
lowercase, so the names you give should be in lowercase as well.
## Examples
iex> [{"x-foo", "foo"}, {"x-bar", "bar"}, {"x-baz", "baz"}]
...> |> RemoteIp.Headers.take(["x-foo", "x-baz", "x-qux"])
[{"x-foo", "foo"}, {"x-baz", "baz"}]
iex> [{"x-dup", "foo"}, {"x-dup", "bar"}, {"x-dup", "baz"}]
...> |> RemoteIp.Headers.take(["x-dup"])
[{"x-dup", "foo"}, {"x-dup", "bar"}, {"x-dup", "baz"}]
"""
@spec take(Plug.Conn.headers(), [binary()]) :: Plug.Conn.headers()
def take(headers, names) do
Enum.filter(headers, fn {name, _} -> name in names end)
end
@doc """
Parses IP addresses out of the given headers.
For each header name/value pair, the value is parsed for zero or more IP
addresses by the parser corresponding to the name. If no such parser exists
in the given map, we fall back to `RemoteIp.Parsers.Generic`.
The IPs are concatenated together into a single flat list. Note that the
relative order is preserved. That is, each header may produce multiple IPs,
which are kept in the order given by that specific header. Then, in the case
of multiple headers, the concatenated list maintains the same order in which
the headers appeared in the original name/value list.
Due to the error-safe nature of the `RemoteIp.Parser` behaviour, headers that
do not actually contain valid IP addresses should be safely ignored.
## Examples
iex> [{"x-one", "1.2.3.4, 2.3.4.5"}, {"x-two", "3.4.5.6, 4.5.6.7"}]
...> |> RemoteIp.Headers.parse()
[{1, 2, 3, 4}, {2, 3, 4, 5}, {3, 4, 5, 6}, {4, 5, 6, 7}]
iex> [{"forwarded", "for=1.2.3.4"}, {"x-forwarded-for", "2.3.4.5"}]
...> |> RemoteIp.Headers.parse()
[{1, 2, 3, 4}, {2, 3, 4, 5}]
iex> [{"accept", "*/*"}, {"user-agent", "ua"}, {"x-real-ip", "1.2.3.4"}]
...> |> RemoteIp.Headers.parse()
[{1, 2, 3, 4}]
"""
@spec parse(Plug.Conn.headers(), %{binary() => RemoteIp.Parser.t()}) :: [
:inet.ip_address()
]
def parse(headers, parsers \\ RemoteIp.Options.default(:parsers)) do
Enum.flat_map(headers, fn {name, value} ->
parser = Map.get(parsers, name, RemoteIp.Parsers.Generic)
parser.parse(value)
end)
end
end
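A minimal pipeline combining the two functions above (header values are illustrative; the results follow the doctests):

```elixir
[{"x-forwarded-for", "1.2.3.4, 2.3.4.5"}, {"accept", "*/*"}]
|> RemoteIp.Headers.take(["x-forwarded-for"])
|> RemoteIp.Headers.parse()
# => [{1, 2, 3, 4}, {2, 3, 4, 5}]
```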
| lib/remote_ip/headers.ex | 0.913907 | 0.547162 | headers.ex | starcoder |
defmodule ComplexNumber do
@moduledoc """
Functions for complex number operations.
"""
@pi :math.pi()
@type t :: number | %ComplexNumber{radius: number, theta: number}
defstruct [:radius, :theta]
@doc """
Checks if the argument is a complex (including real) number or not.
iex> ComplexNumber.is_complex_number(6.85)
true
iex> ComplexNumber.is_complex_number(-3)
true
iex> ComplexNumber.is_complex_number(ComplexNumber.new(3.5, -1))
true
iex> ComplexNumber.is_complex_number(:atom)
false
iex> ComplexNumber.is_complex_number("binary")
false
"""
if Version.match?(System.version(), "< 1.11.0") do
defmacrop is_struct(term, _) do
case __CALLER__.context do
nil ->
quote do
case unquote(term) do
%{__struct__: ComplexNumber} -> true
_ -> false
end
end
:match ->
raise ArgumentError,
"invalid expression in match, is_struct/2 is not allowed in patterns such as " <>
"function clauses, case clauses or on the left side of the = operator"
:guard ->
quote do
is_map(unquote(term)) and
:erlang.is_map_key(:__struct__, unquote(term)) and
:erlang.map_get(:__struct__, unquote(term)) == ComplexNumber
end
end
end
end
defguard is_complex_number(number) when is_number(number) or is_struct(number, ComplexNumber)
@doc """
Creates a new complex number from a real part and an imaginary part.
If the imaginary part is zero, it just returns a real number.
iex> ComplexNumber.new(3, 4)
%ComplexNumber{radius: 5.0, theta: 0.9272952180016122}
iex> ComplexNumber.new(-3, 4)
%ComplexNumber{radius: 5.0, theta: 2.214297435588181}
iex> ComplexNumber.new(3, 0)
3
"""
@spec new(number, number) :: t
def new(real, imaginary) when is_number(real) and imaginary == 0, do: real
def new(real, imaginary) when is_number(real) and is_number(imaginary) do
%ComplexNumber{
radius: :math.sqrt(real * real + imaginary * imaginary),
theta: :math.atan2(imaginary, real)
}
end
@doc """
Returns the real part of the given complex number.
iex> ComplexNumber.real(ComplexNumber.new(6.2, 3))
6.2
iex> ComplexNumber.real(4)
4
"""
@spec real(t) :: number
def real(number) when is_number(number), do: number
def real(%ComplexNumber{radius: radius, theta: theta}), do: radius * :math.cos(theta)
@doc """
Returns the imaginary part of the given complex number.
iex> ComplexNumber.imaginary(ComplexNumber.new(6.2, 3))
3.0
iex> ComplexNumber.imaginary(4)
0
"""
@spec imaginary(t) :: number
def imaginary(number) when is_number(number), do: 0
def imaginary(%ComplexNumber{radius: radius, theta: theta}), do: radius * :math.sin(theta)
@doc """
Returns the absolute value of the given complex number.
iex> ComplexNumber.abs(ComplexNumber.new(4, -3))
5.0
iex> ComplexNumber.abs(4.2)
4.2
"""
@spec abs(t) :: number
def abs(number) when is_number(number), do: Kernel.abs(number)
def abs(%ComplexNumber{radius: radius}), do: Kernel.abs(radius)
@doc """
Negates a complex number.
iex> ComplexNumber.negate(ComplexNumber.new(4, -3))
%ComplexNumber{radius: -5.0, theta: -0.6435011087932844}
iex> ComplexNumber.negate(4.2)
-4.2
"""
@spec negate(t) :: t
def negate(number) when is_number(number), do: -number
def negate(%ComplexNumber{radius: radius} = number), do: %{number | radius: -radius}
@doc """
Adds two complex numbers.
iex> ComplexNumber.add(ComplexNumber.new(0.5, 2.5), ComplexNumber.new(2.5, 1.5))
%ComplexNumber{radius: 5.0, theta: 0.9272952180016122}
iex> ComplexNumber.add(ComplexNumber.new(0.5, 4), 2.5)
%ComplexNumber{radius: 5.0, theta: 0.9272952180016121}
iex> ComplexNumber.add(2.5, ComplexNumber.new(0.5, 4))
%ComplexNumber{radius: 5.0, theta: 0.9272952180016121}
iex> ComplexNumber.add(3.5, 2.5)
6.0
"""
@spec add(t, t) :: t
def add(number1, number2) when is_number(number1) and is_number(number2), do: number1 + number2
def add(number, %ComplexNumber{radius: radius, theta: theta}) when is_number(number) do
new(radius * :math.cos(theta) + number, radius * :math.sin(theta))
end
def add(%ComplexNumber{radius: radius, theta: theta}, number) when is_number(number) do
new(radius * :math.cos(theta) + number, radius * :math.sin(theta))
end
def add(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
) do
new(
radius1 * :math.cos(theta1) + radius2 * :math.cos(theta2),
radius1 * :math.sin(theta1) + radius2 * :math.sin(theta2)
)
end
@doc """
Subtracts a complex number from another one.
iex> ComplexNumber.subtract(ComplexNumber.new(0.5, 2.5), ComplexNumber.new(2.5, 1.5))
%ComplexNumber{radius: 2.2360679774997894, theta: 2.6779450445889874}
iex> ComplexNumber.subtract(ComplexNumber.new(0.5, 4), 2.5)
%ComplexNumber{radius: 4.472135954999579, theta: 2.0344439357957027}
iex> ComplexNumber.subtract(2.5, ComplexNumber.new(0.5, 4))
%ComplexNumber{radius: 4.472135954999579, theta: 1.1071487177940906}
iex> ComplexNumber.subtract(3.5, 2.5)
1.0
"""
@spec subtract(t, t) :: t
def subtract(number1, number2) when is_number(number1) and is_number(number2) do
number1 - number2
end
def subtract(number, %ComplexNumber{radius: radius, theta: theta}) when is_number(number) do
new(number - radius * :math.cos(theta), radius * :math.sin(theta))
end
def subtract(%ComplexNumber{radius: radius, theta: theta}, number) when is_number(number) do
new(radius * :math.cos(theta) - number, radius * :math.sin(theta))
end
def subtract(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
) do
new(
radius1 * :math.cos(theta1) - radius2 * :math.cos(theta2),
radius1 * :math.sin(theta1) - radius2 * :math.sin(theta2)
)
end
@doc """
Computes the product of two complex numbers.
iex> ComplexNumber.multiply(ComplexNumber.new(2, -3), ComplexNumber.new(-3, 0.5))
%ComplexNumber{radius: 10.965856099730653, theta: 1.993650252927837}
iex> ComplexNumber.multiply(ComplexNumber.new(2, -3), ComplexNumber.new(3, 4.5))
19.5
iex> ComplexNumber.multiply(ComplexNumber.new(2, 3), ComplexNumber.new(-3, 4.5))
-19.5
iex> ComplexNumber.multiply(2.5, ComplexNumber.new(3, -0.5))
%ComplexNumber{radius: 7.603453162872774, theta: -0.16514867741462683}
iex> ComplexNumber.multiply(ComplexNumber.new(3, -0.5), 2.5)
%ComplexNumber{radius: 7.603453162872774, theta: -0.16514867741462683}
iex> ComplexNumber.multiply(4, 2.5)
10.0
"""
@spec multiply(t, t) :: t
def multiply(number1, number2) when is_number(number1) and is_number(number2) do
number1 * number2
end
def multiply(number1, %ComplexNumber{radius: radius} = number2) when is_number(number1) do
%{number2 | radius: radius * number1}
end
def multiply(%ComplexNumber{radius: radius} = number2, number1) when is_number(number1) do
%{number2 | radius: radius * number1}
end
def multiply(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
)
when trunc((theta1 + theta2) * 0.5 / @pi) == (theta1 + theta2) * 0.5 / @pi do
radius1 * radius2
end
def multiply(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
)
when trunc((theta1 + theta2) / @pi) == (theta1 + theta2) / @pi do
-radius1 * radius2
end
def multiply(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
) do
%ComplexNumber{radius: radius1 * radius2, theta: theta1 + theta2}
end
@doc """
Divides a complex number by another one.
iex> ComplexNumber.divide(ComplexNumber.new(3, -0.5), ComplexNumber.new(2, 1.5))
%ComplexNumber{radius: 1.2165525060596438, theta: -0.8086497862079112}
iex> ComplexNumber.divide(ComplexNumber.new(3, -0.75), ComplexNumber.new(2, -0.5))
1.5
iex> ComplexNumber.divide(ComplexNumber.new(-3, -0.75), ComplexNumber.new(2, 0.5))
-1.5
iex> ComplexNumber.divide(3, ComplexNumber.new(2, 1.5))
%ComplexNumber{radius: 1.2, theta: -0.6435011087932844}
iex> ComplexNumber.divide(ComplexNumber.new(3, -0.5), 2)
%ComplexNumber{radius: 1.5206906325745548, theta: -0.16514867741462683}
iex> ComplexNumber.divide(3, 2)
1.5
"""
@spec divide(t, t) :: t
def divide(number1, number2) when is_number(number1) and is_number(number2) do
number1 / number2
end
def divide(number, %ComplexNumber{radius: radius, theta: theta}) when is_number(number) do
%ComplexNumber{radius: number / radius, theta: -theta}
end
def divide(%ComplexNumber{radius: radius} = number1, number2) when is_number(number2) do
%{number1 | radius: radius / number2}
end
def divide(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
)
when trunc((theta1 - theta2) * 0.5 / @pi) == (theta1 - theta2) * 0.5 / @pi do
radius1 / radius2
end
def divide(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
)
when trunc((theta1 - theta2) / @pi) == (theta1 - theta2) / @pi do
-radius1 / radius2
end
def divide(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
) do
%ComplexNumber{radius: radius1 / radius2, theta: theta1 - theta2}
end
@doc """
Returns a multivalued function representing the given base taken to the power of the given
exponent.
iex> ComplexNumber.pow(ComplexNumber.new(6, 1.5), ComplexNumber.new(-4, -0.4)).(0)
%ComplexNumber{radius: 0.0007538662030076445, theta: -1.708743364561965}
iex> ComplexNumber.pow(6.5, ComplexNumber.new(-4, -0.4)).(0)
%ComplexNumber{radius: 0.0005602044746332418, theta: -0.7487208707606361}
iex> ComplexNumber.pow(ComplexNumber.new(6, 1.5), -4.4).(0)
%ComplexNumber{radius: 0.0003297697637520032, theta: -1.0779061177582023}
iex> ComplexNumber.pow(6.5, -4.4).(0)
0.0002649605586423526
iex> ComplexNumber.pow(6.5, -4.4).(1)
%ComplexNumber{radius: 0.00026496055864235266, theta: -2.5132741228718367}
iex> ComplexNumber.pow(6.5, 0.5).(1)
-2.5495097567963922
"""
@spec pow(t, t) :: (integer -> t)
def pow(number1, number2) when is_number(number1) and is_integer(number2) do
fn n when is_integer(n) -> :math.pow(number1, number2) end
end
def pow(number1, number2) when is_number(number1) and number1 < 0 and is_float(number2) do
fn
n when is_integer(n) and trunc((n + 0.5) * number2) == (n + 0.5) * number2 ->
:math.pow(number1, number2)
n when is_integer(n) and trunc((n * 2 + 1) * number2) == (n * 2 + 1) * number2 ->
-:math.pow(number1, number2)
n when is_integer(n) ->
%ComplexNumber{
radius: :math.exp(number2 * :math.log(-number1)),
theta: ((n + 0.5) * number2 - trunc((n + 0.5) * number2)) * 2 * @pi
}
end
end
def pow(number1, number2) when is_number(number1) and is_float(number2) do
fn
n when is_integer(n) and trunc(n * number2) == n * number2 ->
:math.pow(number1, number2)
n when is_integer(n) and trunc(n * 2 * number2) == n * 2 * number2 ->
-:math.pow(number1, number2)
n when is_integer(n) ->
%ComplexNumber{
radius: :math.exp(number2 * :math.log(number1)),
theta: (n * number2 - trunc(n * number2)) * 2 * @pi
}
end
end
def pow(number, %ComplexNumber{radius: radius, theta: theta})
when is_number(number) and number < 0 do
x = radius * :math.cos(theta)
y = radius * :math.sin(theta)
log = :math.log(-number)
fn
n
when is_integer(n) and
trunc((n + 0.5) * x + y * log * 0.5 / @pi) == (n + 0.5) * x + y * log * 0.5 / @pi ->
:math.exp(x * log - (2 * n + 1) * @pi * y)
n
when is_integer(n) and
trunc((2 * n + 1) * x + y * log / @pi) == (2 * n + 1) * x + y * log / @pi ->
-:math.exp(x * log - (2 * n + 1) * @pi * y)
n when is_integer(n) ->
%ComplexNumber{
radius: :math.exp(x * log - (2 * n + 1) * @pi * y),
theta: (2 * n + 1) * @pi * x + y * log
}
end
end
def pow(number, %ComplexNumber{radius: radius, theta: theta}) when is_number(number) do
x = radius * :math.cos(theta)
y = radius * :math.sin(theta)
log = :math.log(number)
fn
n
when is_integer(n) and trunc(n * x + y * log * 0.5 / @pi) == n * x + y * log * 0.5 / @pi ->
:math.exp(x * log - 2 * @pi * n * y)
n when is_integer(n) and trunc(2 * n * x + y * log / @pi) == 2 * n * x + y * log / @pi ->
-:math.exp(x * log - 2 * @pi * n * y)
n when is_integer(n) ->
%ComplexNumber{
radius: :math.exp(x * log - 2 * @pi * n * y),
theta: 2 * @pi * n * x + y * log
}
end
end
def pow(%ComplexNumber{radius: radius, theta: theta}, number)
when is_number(number) and radius < 0 do
fn
n
when is_integer(n) and
trunc(((theta / @pi + 1) * 0.5 + n) * number) ==
((theta / @pi + 1) * 0.5 + n) * number ->
:math.pow(-radius, number)
n
when is_integer(n) and
trunc((theta / @pi + n * 2 + 1) * number) == (theta / @pi + n * 2 + 1) * number ->
-:math.pow(-radius, number)
n when is_integer(n) ->
%ComplexNumber{
radius: :math.pow(-radius, number),
theta: (theta + (2 * n + 1) * @pi) * number
}
end
end
def pow(%ComplexNumber{radius: radius, theta: theta}, number) when is_number(number) do
fn
n
when is_integer(n) and
trunc((theta * 0.5 / @pi + n) * number) == (theta * 0.5 / @pi + n) * number ->
:math.pow(radius, number)
n
when is_integer(n) and
trunc((theta / @pi + n * 2) * number) == (theta / @pi + n * 2) * number ->
-:math.pow(radius, number)
n when is_integer(n) ->
%ComplexNumber{radius: :math.pow(radius, number), theta: (theta + 2 * n * @pi) * number}
end
end
def pow(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
)
when radius1 < 0 do
log_r = :math.log(-radius1)
x2 = radius2 * :math.cos(theta2)
y2 = radius2 * :math.sin(theta2)
p = x2 * theta1 + y2 * log_r
fn
n
when is_integer(n) and
trunc((n + 0.5) * x2 + p * 0.5 / @pi) == (n + 0.5) * x2 + p * 0.5 / @pi ->
:math.exp(x2 * log_r - y2 * (theta1 + 2 * n * @pi))
n when is_integer(n) and trunc((n * 2 + 1) * x2 + p / @pi) == (n * 2 + 1) * x2 + p / @pi ->
-:math.exp(x2 * log_r - y2 * (theta1 + 2 * n * @pi))
n when is_integer(n) ->
%ComplexNumber{
radius: :math.exp(x2 * log_r - y2 * ((2 * n + 1) * @pi + theta1)),
theta: (2 * n + 1) * @pi * x2 + p
}
end
end
def pow(
%ComplexNumber{radius: radius1, theta: theta1},
%ComplexNumber{radius: radius2, theta: theta2}
) do
log_r = :math.log(radius1)
x2 = radius2 * :math.cos(theta2)
y2 = radius2 * :math.sin(theta2)
p = x2 * theta1 + y2 * log_r
fn
n when is_integer(n) and trunc(n * x2 + p * 0.5 / @pi) == n * x2 + p * 0.5 / @pi ->
:math.exp(x2 * log_r - y2 * (theta1 + 2 * n * @pi))
n when is_integer(n) and trunc(n * x2 * 2 + p / @pi) == n * x2 * 2 + p / @pi ->
-:math.exp(x2 * log_r - y2 * (theta1 + 2 * n * @pi))
n when is_integer(n) ->
%ComplexNumber{
radius: :math.exp(x2 * log_r - y2 * (2 * n * @pi + theta1)),
theta: 2 * n * @pi * x2 + p
}
end
end
@doc """
Returns the cosine of a complex number.
iex> ComplexNumber.cos(2.1)
-0.5048461045998576
iex> ComplexNumber.cos(ComplexNumber.new(3, -0.5))
%ComplexNumber{radius: 1.1187606807234534, theta: 3.075814483757404}
"""
@spec cos(t) :: t
def cos(number) when is_number(number), do: :math.cos(number)
def cos(%ComplexNumber{radius: radius, theta: theta}) do
x = radius * :math.cos(theta)
y = radius * :math.sin(theta)
new(:math.cos(x) * :math.cosh(y), -:math.sin(x) * :math.sinh(y))
end
@doc """
Returns the sine of a complex number.
iex> ComplexNumber.sin(2.1)
0.8632093666488737
iex> ComplexNumber.sin(ComplexNumber.new(3, -0.5))
%ComplexNumber{radius: 0.5398658852737769, theta: 1.2715925251688622}
"""
@spec sin(t) :: t
def sin(number) when is_number(number), do: :math.sin(number)
def sin(%ComplexNumber{radius: radius, theta: theta}) do
x = radius * :math.cos(theta)
y = radius * :math.sin(theta)
new(:math.sin(x) * :math.cosh(y), :math.cos(x) * :math.sinh(y))
end
@doc """
Returns the tangent of a complex number.
iex> ComplexNumber.tan(2.1)
-1.7098465429045073
iex> ComplexNumber.tan(ComplexNumber.new(3, -0.5))
%ComplexNumber{radius: 0.482557078181072, theta: -1.804221958588542}
"""
@spec tan(t) :: t
def tan(number) when is_number(number), do: :math.tan(number)
def tan(%ComplexNumber{radius: radius, theta: theta}) do
x = radius * :math.cos(theta)
y = radius * :math.sin(theta)
denominator = :math.cos(x * 2) + :math.cosh(y * 2)
new(:math.sin(x * 2) / denominator, :math.sinh(y * 2) / denominator)
end
end
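A worked example of the polar representation paying off: multiplying conjugates makes the angles cancel, so the product collapses to a real number via the first `multiply/2` clause:

```elixir
a = ComplexNumber.new(3, 4)   # radius 5.0, theta ≈ 0.9273
b = ComplexNumber.new(3, -4)  # radius 5.0, theta ≈ -0.9273

ComplexNumber.multiply(a, b)
# => 25.0 (theta1 + theta2 == 0, so the result is radius1 * radius2)
```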
| lib/complex_number.ex | 0.932699 | 0.637299 | complex_number.ex | starcoder |
defmodule ExPng.Image do
@moduledoc """
The primary API module for `ExPng`, `ExPng.Image` provides functions for
reading, editing, and saving images.
"""
alias ExPng.Image.{Decoding, Drawing, Encoding}
alias ExPng.{Color, RawData}
@type row :: [Color.t(), ...]
@type canvas :: [row, ...]
@type t :: %__MODULE__{
pixels: ExPng.maybe(canvas),
raw_data: ExPng.maybe(RawData.t()),
height: pos_integer(),
width: pos_integer()
}
@type filename :: String.t()
@type success :: {:ok, __MODULE__.t()}
@type error :: {:error, String.t(), filename}
defstruct [
:pixels,
:raw_data,
:height,
:width
]
@doc """
Returns a blank (opaque white) image with the provided width and height
"""
@spec new(pos_integer, pos_integer) :: __MODULE__.t()
def new(width, height) do
%__MODULE__{
width: width,
height: height
}
|> erase()
end
@doc """
Constructs a new image from the provided 2-dimensional list of pixels
"""
@spec new(canvas) :: __MODULE__.t()
def new(pixels) do
%__MODULE__{
pixels: pixels,
width: length(Enum.at(pixels, 0)),
height: length(pixels)
}
end
@doc """
Attempts to decode a PNG file into an `ExPng.Image` and returns a success
tuple `{:ok, image}` or an error tuple explaining the encountered error.
ExPng.Image.from_file("adorable_kittens.png")
{:ok, %ExPng.Image{ ... }}
ExPng.Image.from_file("doesnt_exist.png")
{:error, :enoent, "doesnt_exist.png"}
"""
@spec from_file(filename) :: success | error
def from_file(filename) do
case ExPng.RawData.from_file(filename) do
{:ok, raw_data} -> {:ok, Decoding.from_raw_data(raw_data)}
error -> error
end
end
@doc """
Attempts to decode PNG binary data into an `ExPng.Image` and returns a success
tuple `{:ok, image}` or an error tuple explaining the encountered error.
File.read!("test/png_suite/basic/basi2c16.png") |> ExPng.Image.from_binary()
{:ok, %ExPng.Image{ ... }}
ExPng.Image.from_binary("bad data")
{:error, "malformed PNG signature", "bad data"}
"""
@spec from_binary(binary) :: success | error
def from_binary(binary_data) do
case ExPng.RawData.from_binary(binary_data) do
{:ok, raw_data} -> {:ok, Decoding.from_raw_data(raw_data)}
error -> error
end
end
@doc """
Writes the `image` to disk at `filename` using the provided
`encoding_options`.
Encoding options can be:
* interlace: whether or not the image is encoded with Adam7 interlacing.
* defaults to `false`
* filter: the filtering algorithm to use. Can be one of `ExPng.Image.Filtering.{none, sub, up, average, paeth}`
* defaults to `up`
* compression: the compression level for the zlib compression algorithm to use. Can be an integer between 0 (no compression) and 9 (max compression)
* defaults to 6
"""
@spec to_file(__MODULE__.t(), filename, ExPng.maybe(keyword)) :: {:ok, filename} | {:error, term()}
def to_file(%__MODULE__{} = image, filename, encoding_options \\ []) do
with {:ok, raw_data} <- Encoding.to_raw_data(image, encoding_options) do
RawData.to_file(raw_data, filename, encoding_options)
{:ok, filename}
end
end
@doc """
Computes the png binary data using the provided
`encoding_options`.
Encoding options can be:
* interlace: whether or not the image is encoded with Adam7 interlacing.
* defaults to `false`
* filter: the filtering algorithm to use. Can be one of `ExPng.Image.Filtering.{none, sub, up, average, paeth}`
* defaults to `up`
* compression: the compression level for the zlib compression algorithm to use. Can be an integer between 0 (no compression) and 9 (max compression)
* defaults to 6
"""
@spec to_binary(__MODULE__.t(), ExPng.maybe(keyword)) :: {:ok, binary} | {:error, term()}
def to_binary(%__MODULE__{} = image, encoding_options \\ []) do
with {:ok, raw_data} <- Encoding.to_raw_data(image, encoding_options) do
png_binary = RawData.to_binary(raw_data, encoding_options)
{:ok, png_binary}
end
end
@doc """
Returns a list of the unique pixel values used in `image`.
"""
@spec unique_pixels(__MODULE__.t()) :: [Color.t()]
def unique_pixels(%__MODULE__{pixels: pixels}) do
pixels
|> List.flatten()
|> Enum.uniq()
|> Enum.sort_by(fn <<_, _, _, a>> -> a end)
end
defdelegate erase(image), to: Drawing
defdelegate draw(image, coordinates, color), to: Drawing
defdelegate at(image, coordinates), to: Drawing
defdelegate clear(image, coordinates), to: Drawing
defdelegate line(image, coordinates0, coordinates1, color \\ ExPng.Color.black()), to: Drawing
@behaviour Access
@impl true
def fetch(%__MODULE__{} = image, {x, y}) do
case x < image.width && y < image.height do
true ->
pixel =
image.pixels
|> Enum.at(y)
|> Enum.at(x)
{:ok, pixel}
false ->
:error
end
end
@impl true
def get_and_update(%__MODULE__{} = image, {x, y}, func) do
case fetch(image, {x, y}) do
{:ok, pixel} ->
{_, new_pixel} = func.(pixel)
row =
image.pixels
|> Enum.at(round(y))
|> List.replace_at(round(x), new_pixel)
pixels = List.replace_at(image.pixels, round(y), row)
{pixel, %{image | pixels: pixels}}
:error ->
{nil, image}
end
end
@impl true
def pop(%__MODULE__{} = image, {x, y}) do
{nil, update_in(image, [{x, y}], fn _ -> ExPng.Color.white() end)}
end
end
defimpl Inspect, for: ExPng.Image do
use Bitwise
def inspect(%ExPng.Image{pixels: pixels}, _opts) do
for line <- pixels do
Enum.map(line, fn <<r, g, b, a>> ->
pixel =
((r <<< 24) + (g <<< 16) + (b <<< 8) + a)
|> Integer.to_string(16)
|> String.downcase()
|> String.pad_leading(8, "0")
"0x" <> pixel
end)
|> Enum.join(" ")
end
|> Enum.join("\n")
end
end
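A minimal end-to-end sketch (the file name is illustrative; `line/4` defaults to black):

```elixir
image = ExPng.Image.new(3, 3)
image = ExPng.Image.line(image, {0, 0}, {2, 2})

# Pixels are also addressable through the Access behaviour implemented above.
image[{1, 1}]

{:ok, "diagonal.png"} = ExPng.Image.to_file(image, "diagonal.png")
```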
| lib/ex_png/image.ex | 0.928668 | 0.610831 | image.ex | starcoder |
defmodule Samples.FormatterPlugin do
@behaviour Mix.Tasks.Format
@line_break ["\n", "\r\n", "\r"]
def features(_opts) do
[extensions: [".ex", ".exs"]]
end
def format(code, opts) do
formatted_code =
code
|> Code.format_string!(opts)
|> to_string()
|> format_samples()
formatted_code <> "\n"
end
defp format_samples(code, position \\ [line: 0, column: 0]) do
case format_first_samples_from_position(code, position) do
:noop ->
code
{updated_code, position_found} ->
format_samples(updated_code, position_found)
end
end
defp format_first_samples_from_position(code, position) do
samples_zipper =
code
|> Sourceror.parse_string!()
|> Sourceror.Zipper.zip()
|> Sourceror.Zipper.find(fn
{:samples, meta, _} ->
has_do? = Keyword.has_key?(meta, :do)
is_after_position? =
cond do
meta[:line] > position[:line] -> true
meta[:line] == position[:line] and meta[:column] > position[:column] -> true
true -> false
end
has_do? and is_after_position?
_ ->
false
end)
case find_samples_nodes(samples_zipper) do
{nil, _} ->
:noop
{samples_node, nil} ->
{code, Sourceror.get_start_position(samples_node)}
{samples_node, do_node} ->
range =
do_node
|> Sourceror.get_range()
|> Map.update!(:start, fn pos -> [line: pos[:line] + 1, column: 1] end)
content = get_code_by_range(code, range)
samples_column = Sourceror.get_column(samples_node)
replacement = format_table(content <> "\n", samples_column + 1)
patch = %{
change: replacement,
range: range,
preserve_indentation: false
}
{Sourceror.patch_string(code, [patch]), Sourceror.get_start_position(samples_node)}
end
end
defp find_samples_nodes(nil) do
{nil, nil}
end
defp find_samples_nodes(samples_zipper) do
samples_node = Sourceror.Zipper.node(samples_zipper)
do_node =
samples_zipper
|> Sourceror.Zipper.down()
|> Sourceror.Zipper.rightmost()
|> Sourceror.Zipper.down()
|> Sourceror.Zipper.node()
|> case do
{{:__block__, _, [:do]}, {:__block__, _, []}} -> nil
node -> node
end
{samples_node, do_node}
end
defp format_table(code, column_offset) do
ast = code |> Code.string_to_quoted!(columns: true)
{_, positions} =
Macro.prewalk(ast, [], fn
{:|, meta, _children} = node, acc ->
{node, [{meta[:line], meta[:column]} | acc]}
other, acc ->
{other, acc}
end)
positions = Enum.reverse(positions)
{rows, cols_info} = walk(code, 1, 1, positions, [], {[[]], %{}, 0})
last_col_index = map_size(cols_info) - 1
for row <- rows do
Enum.map_join(row, " | ", fn
{^last_col_index, value} ->
align_value(value, cols_info[last_col_index], true)
{col_index, value} ->
offset = if col_index == 0, do: String.duplicate(" ", column_offset), else: ""
offset <> align_value(value, cols_info[col_index], false)
end)
end
|> Enum.join("\n")
end
defp align_value(value, cols_info, last_col?) do
cond do
cols_info.is_number? ->
String.pad_leading(value, cols_info.width)
last_col? ->
value
true ->
String.pad_trailing(value, cols_info.width)
end
end
defp walk("\r\n" <> rest, line, _column, positions, buffer, acc) do
acc = acc |> add_cell(buffer) |> new_line(positions)
walk(rest, line + 1, 1, positions, [], acc)
end
defp walk("\n" <> rest, line, _column, positions, buffer, acc) do
acc = acc |> add_cell(buffer) |> new_line(positions)
walk(rest, line + 1, 1, positions, [], acc)
end
defp walk(<<_::utf8, rest::binary>>, line, column, [{line, column} | positions], buffer, acc) do
walk(rest, line, column + 1, positions, [], add_cell(acc, buffer))
end
defp walk(<<c::utf8, rest::binary>>, line, column, positions, buffer, acc) do
walk(rest, line, column + 1, positions, [<<c::utf8>> | buffer], acc)
end
defp walk(<<>>, _line, _column, _positions, _buffer, {rows, cols_info, _col_index}) do
{Enum.reverse(rows), cols_info}
end
defp add_cell({[cells | rows], cols_info, col_index}, cell) do
value = cell |> Enum.reverse() |> to_string() |> String.trim()
width = String.length(value)
is_number? = is_number?(value)
info = %{width: width, is_number?: is_number?}
cols_info =
Map.update(cols_info, col_index, info, fn info ->
%{width: max(info.width, width), is_number?: info.is_number? or is_number?}
end)
{[[{col_index, value} | cells] | rows], cols_info, col_index + 1}
end
defp is_number?(value) do
value = String.replace(value, "_", "")
match?({_, ""}, Float.parse(value)) or match?({_, ""}, Integer.parse(value))
end
defp new_line({[cells | rows], cols_info, _col_index}, []) do
{[Enum.reverse(cells) | rows], cols_info, 0}
end
defp new_line({[cells | rows], cols_info, _col_index}, _positions) do
{[[] | [Enum.reverse(cells) | rows]], cols_info, 0}
end
defp get_code_by_range(code, range) do
{_, text_after} = split_at(code, range.start[:line], range.start[:column])
line = range.end[:line] - range.start[:line] + 1
{text, _} = split_at(text_after, line, range.end[:column])
text
end
defp split_at(code, line, col) do
pos = find_position(code, line, col, {0, 1, 1})
String.split_at(code, pos)
end
defp find_position(_text, line, col, {pos, line, col}) do
pos
end
defp find_position(text, line, col, {pos, current_line, current_col}) do
case String.next_grapheme(text) do
{grapheme, rest} ->
{new_pos, new_line, new_col} =
if grapheme in @line_break do
if current_line == line do
# this is the line we're looking for
# but it's shorter than expected
{pos, current_line, col}
else
{pos + 1, current_line + 1, 1}
end
else
{pos + 1, current_line, current_col + 1}
end
find_position(rest, line, col, {new_pos, new_line, new_col})
nil ->
pos
end
end
end
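Hypothetical `.formatter.exs` wiring for the plugin (the module name is taken from the definition above):

```elixir
[
  plugins: [Samples.FormatterPlugin],
  inputs: ["{mix,.formatter}.exs", "{config,lib,test}/**/*.{ex,exs}"]
]
```

With this in place, `mix format` routes `.ex`/`.exs` files through `format/2`, which aligns any `samples do ... end` tables after the standard formatting pass.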
| lib/formatter_plugin.ex | 0.569254 | 0.512693 | formatter_plugin.ex | starcoder |
defmodule ApiWeb.Plugs.ModifiedSinceHandler do
@moduledoc """
Checks for the `If-Modified-Since` header.
Whenever the header is found, the value is parsed and compared to the value
returned from an expected state module. If a resource hasn't been updated
since the provided timestamp, a 304 status is given. Otherwise, the latest
data is fetched.
Refer to:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Modified-Since
## Expected Date Format
The expected format is `Wed, 21 Oct 2015 07:28:00 GMT`.
## Expected Usage
The plug is expected to be used at the controller level where a state module
can be provided/known or in `ApiWeb.ApiControllerHelpers`.
defmodule ApiWeb.ResourceController
use Phoenix.Controller
plug ApiWeb.Plugs.ModifiedSinceHandler, caller: __MODULE__
#...
def state_module, do: State.Resource
end
"""
import Plug.Conn
@doc """
Configures the plug.
## Options
* `:caller` - (Required) Module calling the plug. Module should define
`state_module/0`, which returns a State module.
"""
def init(opts) do
_ = ensure_caller_defined(opts)
opts
end
def call(conn, opts) do
_ = ensure_caller_defined(opts)
_ = ensure_state_module_implemented(opts)
with mod when mod != nil <- state_module(opts),
last_modified_header = State.Metadata.last_modified_header(mod),
conn = Plug.Conn.put_resp_header(conn, "last-modified", last_modified_header),
{conn, [if_modified_since_header]} <- {conn, get_req_header(conn, "if-modified-since")},
{conn, false} <- {conn, is_modified?(last_modified_header, if_modified_since_header)} do
conn
|> send_resp(:not_modified, "")
|> halt()
else
{%Plug.Conn{} = conn, _error} ->
conn
_error ->
conn
end
end
def is_modified?(same, same) do
# shortcut if the headers have the same value
false
end
def is_modified?(first, second) do
with {:ok, first_val} <- modified_value(first),
{:ok, second_val} <- modified_value(second) do
first_val > second_val
else
_ -> true
end
end
defp modified_value(
<<_::binary-5, day::binary-2, " ", month_str::binary-3, " ", year::binary-4, " ",
time::binary-8, " GMT">>
) do
{:ok, month} = month_val(month_str)
{:ok, {year, month, day, time}}
end
defp modified_value(_) do
:error
end
for {month_str, index} <- Enum.with_index(~w(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec)) do
defp month_val(unquote(month_str)), do: {:ok, unquote(index)}
end
defp month_val(_), do: :error
defp state_module(opts) do
opts[:caller].state_module()
end
if Application.get_env(:api_web, __MODULE__)[:check_caller] do
defp ensure_caller_defined(opts) do
unless opts[:caller] do
raise ArgumentError, "expected `:caller` to be provided with module"
end
end
defp ensure_state_module_implemented(opts) do
unless opts[:caller].module_info(:exports)[:state_module] == 0 do
raise ArgumentError,
"expected `:caller` to implement " <>
"`state_module/0` and return a module from the " <> "`State` namespace"
end
end
else
defp ensure_caller_defined(_), do: :ok
defp ensure_state_module_implemented(_), do: :ok
end
end
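The comparison logic can be checked in isolation (dates use the format from the moduledoc):

```elixir
alias ApiWeb.Plugs.ModifiedSinceHandler

# The resource changed after the client's cached copy: serve fresh data.
ModifiedSinceHandler.is_modified?(
  "Thu, 22 Oct 2015 07:28:00 GMT",
  "Wed, 21 Oct 2015 07:28:00 GMT"
)
# => true

# Identical headers short-circuit to false, which yields a 304 response.
ModifiedSinceHandler.is_modified?(
  "Wed, 21 Oct 2015 07:28:00 GMT",
  "Wed, 21 Oct 2015 07:28:00 GMT"
)
# => false
```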
| apps/api_web/lib/api_web/plugs/modified_since_handler.ex | 0.823435 | 0.534248 | modified_since_handler.ex | starcoder |
defmodule FTTZ do
@hours 0..23
@daytime 8..20
def stats(screen_name, scale \\ :log10) do
data = screen_name |> times
data |> timezone
IO.puts "\nTweet distribution throughout the day (at UTC±00:00):\n"
data |> graph(scale)
end
defp times(screen_name) do
timeline(screen_name)
|> Stream.map(&extract_hour/1)
|> Enum.reduce(%{}, &group_hours/2)
end
defp timeline(screen_name) do
ExTwitter.user_timeline([screen_name: screen_name, include_rts: true, count: 200])
end
defp extract_hour(%{created_at: created_at} = _tweet) do
FTTZ.Time.string_to_time(created_at).hour
end
defp group_hours(hour, acc), do: Map.update(acc, hour, 1, fn x -> x + 1 end)
defp graph(data, :linear) do
hours = @hours
|> Enum.map(&Integer.to_string/1)
|> Enum.map(&(String.pad_leading(&1, 2, "0")))
|> Enum.join(" ")
max_count = data |> Map.values |> Enum.max
rows = for v <- 0..max_count do
for h <- @hours do
case max(0, (data[h] || 0) - v) do
# two spaces keep blank cells aligned with the two-character "██" bars
0 -> "  "
_ -> "██"
end
end
|> Enum.join(" ")
end
IO.puts hours
Enum.each(rows, &IO.puts/1)
end
defp graph(data, :log10) do
data
|> Enum.map(fn {k, v} -> {k, round(:math.log10(v) * 10)} end)
|> Enum.into(%{})
|> graph(:linear)
end
defp timezone(data) do
offset = data |> shift_to_daytime
# Not sure if that makes sense.
# Probably not since UTC spans over 26 offsets, not 24.
# Anyway, it's an OK approximation for this module.
offset = if offset > 12 do
(-offset + 23)
|> Integer.to_string
|> String.pad_leading(2, "0")
|> String.pad_leading(3, "+")
else
offset
|> Integer.to_string
|> String.pad_leading(2, "0")
|> String.pad_leading(3, "-")
end
IO.puts "Most likely time offset is UTC#{offset}:00"
end
defp shift_to_daytime(data), do: shift_to_daytime(data, 0, {0, 0})
# stop after all 24 one-hour shifts (0..23) have been evaluated
defp shift_to_daytime(_data, 24, {best_iter, _}), do: best_iter
defp shift_to_daytime(data, iter, best = {_, best_count}) do
count = daytime_count(data)
best = if count > best_count, do: {iter, count}, else: best
shift_to_daytime(data |> shift_left, iter + 1, best)
end
defp daytime_count(data) do
Enum.reduce(@daytime, 0, fn (hour, acc) -> acc + Map.get(data, hour, 0) end)
end
defp shift_left(data) do
data
|> Enum.map(fn {k, v} -> {if(k - 1 < 0, do: 23, else: k - 1), v} end)
|> Enum.into(%{})
end
end
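Usage sketch (requires ExTwitter to be configured with Twitter API credentials):

```elixir
FTTZ.stats("elixirlang")          # log10-scaled histogram (the default)
FTTZ.stats("elixirlang", :linear) # linear scale
```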
| lib/fttz.ex | 0.550124 | 0.40645 | fttz.ex | starcoder |
defmodule GGity.Scale.Shape do
@moduledoc false
alias GGity.{Draw, Labels}
alias GGity.Scale.Shape
@palette_values [:circle, :triangle, :square, :plus, :square_cross]
defstruct transform: nil,
levels: nil,
labels: :waivers,
guide: :legend
@type t() :: %__MODULE__{}
@spec new(keyword()) :: Shape.t()
def new(options \\ []), do: struct(Shape, options)
@spec train(Shape.t(), list(binary())) :: Shape.t()
def train(scale, [level | _other_levels] = levels) when is_list(levels) and is_binary(level) do
transform = GGity.Scale.Discrete.transform(levels, palette(levels))
struct(scale, levels: levels, transform: transform)
end
defp palette(levels) do
@palette_values
|> Stream.cycle()
|> Enum.take(length(levels))
end
@spec draw_legend(Shape.t(), binary(), number(), keyword()) :: iolist()
def draw_legend(%Shape{guide: :none}, _label, _key_height, _fixed_aesthetics), do: []
def draw_legend(%Shape{levels: [_]}, _label, _key_height, _fixed_aesthetics), do: []
def draw_legend(%Shape{levels: levels} = scale, label, key_height, fixed_aesthetics) do
[
Draw.text(
"#{label}",
x: "0",
y: "-5",
class: "gg-text gg-legend-title",
text_anchor: "left"
),
Stream.with_index(levels)
|> Enum.map(fn {level, index} ->
draw_legend_item(scale, {level, index}, key_height, fixed_aesthetics)
end)
]
end
defp draw_legend_item(scale, {level, index}, key_height, fixed_aesthetics) do
[
Draw.rect(
x: "0",
y: "#{key_height * index}",
height: key_height,
width: key_height,
class: "gg-legend-key"
),
GGity.Shapes.draw(
scale.transform.(level),
{key_height / 2, key_height / 2 + key_height * index},
:math.pow(1 + key_height / 3, 2),
fill: fixed_aesthetics[:fill],
color: fixed_aesthetics[:color],
fill_opacity: fixed_aesthetics[:alpha]
),
Draw.text(
"#{Labels.format(scale, level)}",
x: "#{5 + key_height}",
y: "#{10 + key_height * index}",
class: "gg-text gg-legend-text",
text_anchor: "left"
)
]
end
end
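A sketch of training in action, assuming `GGity.Scale.Discrete.transform/2` pairs levels with palette values in order:

```elixir
scale =
  GGity.Scale.Shape.new()
  |> GGity.Scale.Shape.train(["setosa", "versicolor", "virginica"])

scale.transform.("versicolor")
# => :triangle (the second entry in @palette_values)
```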
| lib/ggity/scale/shape.ex | 0.832849 | 0.449091 | shape.ex | starcoder |
defmodule TrentoWeb.OpenApi.Schema.ChecksCatalog do
@moduledoc false
require OpenApiSpex
alias OpenApiSpex.Schema
alias TrentoWeb.OpenApi.Schema.Provider
defmodule Check do
@moduledoc false
OpenApiSpex.schema(%{
title: "Check",
description: "An available check to be executed on the target infrastructure",
type: :object,
properties: %{
id: %Schema{type: :string, description: "Check ID", format: :uuid},
name: %Schema{type: :string, description: "Check Name"},
description: %Schema{type: :string, description: "Check Description"},
remediation: %Schema{type: :string, description: "Check Remediation"},
implementation: %Schema{type: :string, description: "Check Implementation"},
labels: %Schema{type: :string, description: "Check Labels"},
premium: %Schema{
type: :boolean,
description: "Indicates whether the current check is a Premium check"
},
group: %Schema{
type: :string,
description: "Check Group, available when requiring a Flat Catalog"
},
provider: Provider.SupportedProviders
}
})
end
defmodule FlatCatalog do
@moduledoc false
OpenApiSpex.schema(%{
title: "FlatCatalog",
description: "A flat list of the available Checks",
type: :array,
items: Check
})
end
defmodule ChecksGroup do
@moduledoc false
OpenApiSpex.schema(%{
title: "ChecksGroup",
description: "A Group of related Checks (Corosync, Pacemaker ...)",
type: :object,
properties: %{
group: %Schema{type: :string, description: "Group Name"},
checks: FlatCatalog
}
})
end
defmodule ProviderCatalog do
@moduledoc false
OpenApiSpex.schema(%{
title: "ProviderCatalog",
description: "A Provider specific Catalog, and respective values",
type: :object,
properties: %{
provider: %Schema{
title: "ChecksProvider",
type: :string,
description:
"The provider determining the values for the attached checks (azure, aws ...)",
enum: [:azure, :aws, :gcp, :default]
},
groups: %Schema{
title: "ChecksGroups",
description: "A list of ChecksGroup for the respective provider",
type: :array,
items: ChecksGroup
}
}
})
end
defmodule GroupedCatalog do
@moduledoc false
OpenApiSpex.schema(%{
title: "GroupedCatalog",
description:
"A list of available Checks: grouped by provider (azure, aws ...) and checks groups (Corosync, Pacemaker ...)",
type: :array,
items: ProviderCatalog
})
end
defmodule Catalog do
@moduledoc false
OpenApiSpex.schema(%{
title: "ChecksCatalog",
description: "A representation of the Checks Catalog",
oneOf: [
GroupedCatalog,
FlatCatalog
],
example: [
%{
groups: [
%{
checks: [
%{
description: "Corosync `token` timeout is set to `5000`\n",
id: "156F64",
implementation:
"---\n\n- name: \"{{ name }}.check\"\n lineinfile:\n path: /etc/corosync/corosync.conf\n regexp: '^(\\s+){{ key_name }}:'\n line: \"\\t{{ key_name }}: {{ expected[name] }}\"\n insertafter: 'totem {'\n register: config_updated\n when:\n - ansible_check_mode\n\n- block:\n - name: Post results\n import_role:\n name: post-results\n when:\n - ansible_check_mode\n vars:\n status: \"{{ config_updated is not changed }}\"",
labels: "generic",
name: "1.1.1",
premium: false,
remediation:
"## Abstract\nThe value of the Corosync `token` timeout is not set as recommended.\n\n## Remediation\n\nAdjust the corosync `token` timeout as recommended on the best practices, and reload the corosync configuration\n\n1. Set the correct `token` timeout in the totem session in the corosync config file `/etc/corosync/corosync.conf`. This action must be repeated in all nodes of the cluster.\n ```\n [...]\n totem { \n token: <timeout value> \n }\n [...]\n ``` \n2. Reload the corosync configuration:\n `crm corosync reload`\n\n## References\n- https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker\n"
},
%{
description: "Corosync is running with `token` timeout set to `5000`\n",
id: "53D035",
implementation:
"---\n\n- name: \"{{ name }}.check\"\n shell: 'corosync-cmapctl | grep \"runtime.config.totem.token (u32) = \" | sed \"s/^.*= //\"'\n check_mode: false\n register: config_updated\n changed_when: config_updated.stdout != expected['1.1.1']\n\n- block:\n - name: Post results\n import_role:\n name: post-results\n when:\n - ansible_check_mode\n vars:\n status: \"{{ config_updated is not changed }}\"",
labels: "generic",
name: "1.1.1.runtime",
premium: false,
remediation:
"## Abstract\nThe runtime value of the Corosync `token` timeout is not set as recommended.\n\n## Remediation\n\nAdjust the corosync `token` timeout as recommended on the best practices, and reload the corosync configuration\n\n\n1. Set the correct `token` timeout in the totem session in the corosync config file `/etc/corosync/corosync.conf`. This action must be repeated in all nodes of the cluster.\n ```\n [...]\n totem { \n token: <timeout value> \n }\n [...]\n ``` \n2. Reload the corosync configuration:\n `crm corosync reload`\n\n## References\n- https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker\n"
}
],
group: "Corosync"
},
%{
checks: [
%{
description: "Fencing is enabled in the cluster attributes\n",
id: "205AF7",
implementation:
"---\n\n- name: \"{{ name }}.check\"\n command: 'crm_attribute -t crm_config -G -n stonith-enabled --quiet'\n check_mode: false\n register: config_updated\n changed_when: config_updated.stdout != expected[name]\n\n- block:\n - name: Post results\n import_role:\n name: post-results\n when:\n - ansible_check_mode\n vars:\n status: \"{{ config_updated is not changed }}\"",
labels: "generic",
name: "1.2.1",
premium: false,
remediation:
"## Abstract\nFencing is mandatory to guarantee data integrity for your SAP Applications.\nRunning a HA Cluster without fencing is not supported and might cause data loss.\n\n## Remediation\nExecute the following command to enable it:\n```\ncrm configure property stonith-enabled=true\n```\n\n## References\n- https://documentation.suse.com/sle-ha/15-SP3/html/SLE-HA-all/cha-ha-fencing.html#sec-ha-fencing-recommend\n"
}
],
group: "Pacemaker"
}
],
provider: "aws"
}
]
})
end
defmodule CatalogNotfound do
@moduledoc false
OpenApiSpex.schema(%{
title: "CatalogNotfound",
description: "No Catalog was found for the provided query",
type: :object,
properties: %{
error: %Schema{
type: :string,
enum: [:not_found]
}
},
example: %{error: "not_found"}
})
end
defmodule UnableToLoadCatalog do
@moduledoc false
OpenApiSpex.schema(%{
title: "UnableToLoadCatalog",
description: "Something wrong happened while loading the catalog. ie: it is not ready yet",
type: :object,
properties: %{
error: %Schema{type: :string, description: "The error message"}
},
example: %{error: "(not_ready|some other error message)"}
})
end
end
| lib/trento_web/openapi/schema/checks_catalog.ex | 0.799521 | 0.463626 | checks_catalog.ex | starcoder |
defmodule Classifiers.Perceptron.Average do
defstruct weights: %{},
edges: %{},
count: 0,
epoch: 0
@stream_chunks 10
@doc """
Get a new classifier pid.
"""
def new do
{:ok, pid} = Agent.start_link fn ->
%Classifiers.Perceptron.Average{}
end
pid
end
@doc """
Fit a stream of data to an existing classifier.
Currently expects input in the form of a stream of maps as the following:
[ feature_1, feature_2, ... feature_n, class ]
"""
def fit(stream, pid, options \\ []) do
epochs = Keyword.get(options, :epochs, 5)
Enum.each(1..epochs, fn epoch ->
stream
# chunk_every/2 keeps the trailing partial chunk, so no training rows are dropped
|> Stream.chunk_every(@stream_chunks)
|> Enum.each(fn chunk ->
Agent.get_and_update(pid, &update(&1, chunk, epoch))
end)
end)
end
@doc """
Predict the class for one set of features.
"""
def predict_one(row, pid) do
features = Enum.with_index(row)
classifier(pid) |> make_prediction(features, false)
end
@doc """
Predict the classes for a stream of features
"""
def predict(stream, pid) do
c = classifier(pid)
stream |> Stream.transform(0, fn row, acc ->
features = Enum.with_index(row)
{ [ c |> make_prediction(features, false) ], acc + 1 }
end)
end
defp update(classifier, chunk, epoch) do
c = chunk |> Enum.reduce classifier, fn row, classifier ->
{ label, features } = row |> split_label_and_features
classifier = case classifier |> make_prediction(features, true) do
nil ->
%{
classifier |
edges: classifier |> calculate_edges(label, features)
}
^label ->
classifier
prediction ->
%{
classifier |
edges: classifier |> calculate_edges(label, features, prediction)
}
end
%{
classifier |
count: classifier.count + 1,
epoch: epoch,
weights: classifier |> calculate_weights
}
end
{:ok, c}
end
defp split_label_and_features(row) do
label = row |> List.last
features = row |> Enum.drop(-1) |> Enum.with_index()
{ label, features }
end
# No edges trained yet: return nil so update/3 seeds the initial edges.
defp make_prediction(%{edges: edges}, _, true) when map_size(edges) == 0 do
nil
end
defp make_prediction(%{edges: edges}, features, true) do
{p, _} = edges |> Enum.max_by fn { _, edge } ->
features |> Enum.reduce(0, fn feature, weight -> weight + Map.get(edge, feature, 0) end)
end
p
end
defp make_prediction(%{weights: weights}, features, false) do
{p, _} = weights |> Enum.max_by fn { _, weight } ->
features |> Enum.reduce(0, fn feature, w -> w + Map.get(weight, feature, 0) end)
end
p
end
defp calculate_edges(%{edges: edges}, label, features) do
edges |> Map.put(
label, features |> Enum.into(%{}, &({&1, 1}))
)
end
defp calculate_edges(%{edges: edges}, label, features, prediction) do
edges |> Map.update(
label, %{}, fn current ->
features |> Enum.reduce(
current, fn feature, current ->
current |> Map.update(feature, 0, &(&1 + 1))
end
)
end
) |> Map.update(
prediction, %{}, fn current ->
features |> Enum.reduce(
current, fn feature, current ->
current |> Map.update(feature, 0, &(&1 - 1))
end
)
end
)
end
defp calculate_weights(%{edges: edges, weights: feature_weights, count: count}) do
edges |> Enum.reduce(
feature_weights, fn { label, edges }, weights ->
target = weights |> Map.get(label, %{})
target = edges |> Enum.reduce(target, fn { feature, edge }, target ->
target |> Map.update(feature, 0, fn weight ->
(count * weight + edge) / (count + 1)
end)
end)
weights |> Map.update(label, %{}, fn w -> w |> Map.merge(target) end)
end
)
end
defp classifier(pid) do
Agent.get pid, fn c -> c end
end
end
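A toy end-to-end run; rows follow the `[feature_1, ..., feature_n, class]` shape from the `fit/3` docs, and the exact prediction depends on training order:

```elixir
pid = Classifiers.Perceptron.Average.new()

[[1, 0, :a], [0, 1, :b], [1, 1, :a], [0, 0, :b]]
|> Classifiers.Perceptron.Average.fit(pid, epochs: 3)

Classifiers.Perceptron.Average.predict_one([1, 0], pid)
# => :a (expected for this toy data, though not guaranteed)
```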
| lib/classifiers/perceptron/average.ex | 0.839931 | 0.666166 | average.ex | starcoder |
defmodule Toby.Data.Server do
@moduledoc """
A caching layer on top of `Toby.Data.Provider` so that system information can
be retrieved on an interval independent of the window refresh rate.
"""
use GenServer
alias Toby.Data.{Provider, Samples}
@cache_ms 2000
def start_link(_) do
GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
end
def fetch(pid \\ __MODULE__, key) do
GenServer.call(pid, {:fetch, key})
end
def fetch!(pid \\ __MODULE__, key) do
{:ok, value} = fetch(pid, key)
value
end
def set_sample_source(pid \\ __MODULE__, node) do
GenServer.call(pid, {:set_sample_source, node})
end
@impl true
def init(:ok) do
Process.send_after(self(), :sample, 100)
{:ok, %{cache: %{}, sample_source: Node.self(), samples: []}}
end
@impl true
def handle_call({:fetch, name}, _from, state) do
case fetch_cached(state, name) do
{:ok, value, state} ->
{:reply, {:ok, value}, state}
{:error, error} ->
{:reply, {:error, error}, state}
end
end
def handle_call({:set_sample_source, node}, _from, state) do
{:reply, :ok, %{state | sample_source: node, samples: []}}
end
@impl true
def handle_info(:sample, state) do
# Collect one sample per second, keeping a rolling window of the last 60.
Process.send_after(self(), :sample, 1000)
new_samples = [Samples.collect(state.sample_source) | Enum.take(state.samples, 59)]
{:noreply, %{state | samples: new_samples}}
end
@impl true
def handle_info(_, state) do
{:noreply, state}
end
defp fetch_cached(state, key) do
case state.cache[key] do
{value, expires_at} ->
if expires_at > now() do
{:ok, value, state}
else
fetch_new(state, key)
end
_ ->
fetch_new(state, key)
end
end
defp fetch_new(state, key) do
with {:ok, new_value} <- Provider.provide(key, state.samples) do
{:ok, new_value, %{state | cache: put_cache_entry(state.cache, key, new_value)}}
end
end
defp put_cache_entry(cache, key, value) do
Map.put(cache, key, {value, now() + @cache_ms})
end
defp now, do: :erlang.monotonic_time(:millisecond)
end
# source: lib/toby/data/server.ex
defmodule Vix.Vips.Image do
defstruct [:ref]
alias __MODULE__
@moduledoc """
Vips Image
"""
alias Vix.Type
alias Vix.Nif
@behaviour Type
@typedoc """
Represents an instance of libvips image
"""
@type t() :: %Image{ref: reference()}
@impl Type
def typespec do
quote do
unquote(__MODULE__).t()
end
end
@impl Type
def default(nil), do: :unsupported
@impl Type
def to_nif_term(image, _data) do
case image do
%Image{ref: ref} ->
ref
value ->
raise ArgumentError, message: "expected Vix.Vips.Image. given: #{inspect(value)}"
end
end
@impl Type
def to_erl_term(ref), do: %Image{ref: ref}
@doc """
Opens `path` for reading, returns an instance of `t:Vix.Vips.Image.t/0`
It can load files in many image formats, including VIPS, TIFF, PNG, JPEG, FITS, Matlab, OpenEXR, CSV, WebP, Radiance, RAW, PPM and others.
Load options may be appended to filename as "[name=value,...]". For example:
```elixir
Image.new_from_file("fred.jpg[shrink=2]")
```
This will open "fred.jpg", downsampling by a factor of two.
The full set of options available depends upon the load operation that will be executed. Try something like:
```shell
$ vips jpegload
```
at the command-line to see a summary of the available options for the JPEG loader.
Loading is fast: only enough of the image is loaded to be able to fill out the header. Pixels will only be decompressed when they are needed.
"""
@spec new_from_file(String.t()) :: {:ok, __MODULE__.t()} | {:error, term()}
def new_from_file(path) do
path = Path.expand(path)
Nif.nif_image_new_from_file(normalize_string(path))
|> wrap_type()
end
@doc """
Creates a new image with width, height, format, interpretation, resolution and offset taken from the input image, but with each band set from `value`.
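For example, to make a black image matching an existing three-band image
(the band count here is illustrative):
```elixir
{:ok, black} = Image.new_from_image(vips_image, [0.0, 0.0, 0.0])
```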
"""
@spec new_from_image(__MODULE__.t(), [float()]) :: {:ok, __MODULE__.t()} | {:error, term()}
def new_from_image(%Image{ref: vips_image}, value) do
float_value = Enum.map(value, &Vix.GObject.Double.normalize/1)
Nif.nif_image_new_from_image(vips_image, float_value)
|> wrap_type()
end
# Copy an image to a memory area.
# If image is already a memory buffer, just ref and return. If it's
# a file on disc or a partial, allocate memory and copy the image to
# it. Intended to be used with draw operations when they are
# properly supported
@doc false
@spec copy_memory(__MODULE__.t()) :: {:ok, __MODULE__.t()} | {:error, term()}
def copy_memory(%Image{ref: vips_image}) do
Nif.nif_image_copy_memory(vips_image)
|> wrap_type()
end
@doc """
Write `vips_image` to a file.
Save options may be encoded in the filename or given as a hash. For example:
```elixir
Image.write_to_file(vips_image, "fred.jpg[Q=90]")
```
A saver is selected based on `path`. The full set of save options depends on the selected saver. Try something like:
```shell
$ vips jpegsave
```
at the command-line to see all the available options for JPEG save.
"""
@spec write_to_file(__MODULE__.t(), String.t()) :: :ok | {:error, term()}
def write_to_file(%Image{ref: vips_image}, path) do
Nif.nif_image_write_to_file(vips_image, normalize_string(path))
end
@doc """
Returns `vips_image` as binary based on the format specified by `suffix`. This function is similar to `write_to_file` but instead of writing the output to the file, it returns it as a binary.
Currently only TIFF, JPEG and PNG formats are supported.
Save options may be encoded in the filename or given as a hash. For example:
```elixir
Image.write_to_buffer(vips_image, ".jpg[Q=90]")
```
The full set of save options depends on the selected saver. You can list the available options for the saver with:
```shell
$ vips jpegsave
```
"""
@spec write_to_buffer(__MODULE__.t(), String.t()) ::
{:ok, binary()} | {:error, term()}
def write_to_buffer(%Image{ref: vips_image}, suffix) do
Nif.nif_image_write_to_buffer(vips_image, normalize_string(suffix))
end
@doc """
Make a VipsImage which, when written to, will create a temporary file on disc.
The file will be automatically deleted when the image is destroyed. format is something like `"%s.v"` for a vips file.
The file is created in the temporary directory. This is set with the environment variable TMPDIR. If this is not set, then on Unix systems, vips will default to `/tmp`. On Windows, vips uses `GetTempPath()` to find the temporary directory.
```elixir
{:ok, vips_image} = Image.new_temp_file("%s.v")
```
"""
@spec new_temp_file(String.t()) :: {:ok, __MODULE__.t()} | {:error, term()}
def new_temp_file(format) do
Nif.nif_image_new_temp_file(normalize_string(format))
|> wrap_type()
end
@doc """
Make a VipsImage from list.
This convenience function makes an image which is a matrix: a one-band VIPS_FORMAT_DOUBLE image held in memory. Useful for vips operations such as `conv`.
```elixir
{:ok, mask} = Image.new_matrix_from_array(3, 3, [[0, 1, 0], [1, 1, 1], [0, 1, 0]])
```
## Optional
* scale - Default: 1
* offset - Default: 0
"""
@spec new_matrix_from_array(integer, integer, list(list), keyword()) ::
{:ok, __MODULE__.t()} | {:error, term()}
def new_matrix_from_array(width, height, list, optional \\ []) do
scale = to_double(optional[:scale], 1)
offset = to_double(optional[:offset], 0)
Nif.nif_image_new_matrix_from_array(width, height, flatten_list(list), scale, offset)
|> wrap_type()
end
@doc """
Get all image header field names.
See https://libvips.github.io/libvips/API/current/libvips-header.html#vips-image-get-fields for more details
"""
@spec header_field_names(__MODULE__.t()) :: {:ok, [String.t()]} | {:error, term()}
def header_field_names(%Image{ref: vips_image}) do
Nif.nif_image_get_fields(vips_image)
end
@doc """
Get image header value.
This is a generic function to get header value.
Casts the value to appropriate type. Returned value can be integer, float, string, binary, list. Use `Vix.Vips.Image.header_value_as_string/2` to get string representation of any header value.
```elixir
{:ok, width} = Image.header_value(vips_image, "width")
```
"""
@spec header_value(__MODULE__.t(), String.t()) ::
{:ok, integer() | float() | String.t() | binary() | list()} | {:error, term()}
def header_value(%Image{ref: vips_image}, name) do
value = Nif.nif_image_get_header(vips_image, normalize_string(name))
case value do
{:ok, {type, value}} ->
{:ok, Vix.Type.to_erl_term(type, value)}
{:error, reason} ->
{:error, reason}
end
end
@doc """
Get image header value as string.
This is generic method to get string representation of a header value. If value is VipsBlob, then it returns base64 encoded data.
See: https://libvips.github.io/libvips/API/current/libvips-header.html#vips-image-get-as-string
"""
@spec header_value_as_string(__MODULE__.t(), String.t()) :: {:ok, String.t()} | {:error, term()}
def header_value_as_string(%Image{ref: vips_image}, name) do
Nif.nif_image_get_as_string(vips_image, normalize_string(name))
end
for name <-
~w/width height bands xres yres xoffset yoffset filename mode scale offset page_height n_pages orientation interpretation coding format/ do
func_name = String.to_atom(name)
@doc """
Get #{name} of the image
see: https://libvips.github.io/libvips/API/current/libvips-header.html#vips-image-get-#{String.replace(name, "_", "-")}
"""
@spec unquote(func_name)(__MODULE__.t()) :: term() | no_return()
def unquote(func_name)(vips_image) do
case header_value(vips_image, unquote(name)) do
{:ok, value} -> value
{:error, error} -> raise to_string(error)
end
end
end
defp normalize_string(str) when is_binary(str), do: str
defp normalize_string(str) when is_list(str), do: to_string(str)
defp flatten_list(list) do
Enum.flat_map(list, fn p ->
Enum.map(p, &to_double/1)
end)
end
defp to_double(v) when is_integer(v), do: v * 1.0
defp to_double(v) when is_float(v), do: v
defp to_double(nil, default), do: to_double(default)
defp to_double(v, _default), do: to_double(v)
defp wrap_type({:ok, ref}), do: {:ok, %Image{ref: ref}}
defp wrap_type(value), do: value
end
# source: lib/vix/vips/image.ex
defmodule Zaryn.Governance.Code.Proposal do
@moduledoc """
Represents a proposal for code changes
"""
alias __MODULE__.Parser
alias Zaryn.Crypto
alias Zaryn.TransactionChain.Transaction
alias Zaryn.TransactionChain.Transaction.ValidationStamp
alias Zaryn.TransactionChain.TransactionData
defstruct [
:address,
:previous_public_key,
:timestamp,
:description,
:changes,
:version,
:files,
approvals: []
]
@type t :: %__MODULE__{
address: binary(),
previous_public_key: Crypto.key(),
timestamp: nil | DateTime.t(),
description: binary(),
changes: binary(),
version: binary(),
files: list(binary()),
approvals: list(binary())
}
@doc """
Create a code proposal from a transaction
"""
@spec from_transaction(Transaction.t()) ::
{:ok, t()}
| {:error, :missing_description}
| {:error, :missing_changes}
| {:error, :missing_version}
def from_transaction(
tx = %Transaction{
address: address,
data: %TransactionData{content: content},
previous_public_key: previous_public_key
}
) do
with {:ok, description} <- Parser.get_description(content),
{:ok, changes} <- Parser.get_changes(content),
{:ok, version} <- Parser.get_version(changes) do
{:ok,
%__MODULE__{
address: address,
previous_public_key: previous_public_key,
timestamp: validation_timestamp(tx),
description: description,
changes: changes,
version: version,
files: Parser.list_files(changes),
approvals: []
}}
end
end
# A validated transaction carries its timestamp in the validation stamp; an
# unvalidated one has no timestamp yet. Extracting it here keeps
# `from_transaction/1` to a single reachable clause.
defp validation_timestamp(%Transaction{
validation_stamp: %ValidationStamp{timestamp: timestamp}
}),
do: timestamp
defp validation_timestamp(_), do: nil
@doc """
Add the approvals to the code proposal
## Examples
iex> %Proposal{}
...> |> Proposal.add_approvals([<<0, 145, 11, 248, 77, 93, 69, 102, 3, 217, 40, 238, 90, 2, 240, 137, 127, 242,
...> 124, 105, 141, 192, 142, 148, 132, 159, 146, 51, 214, 138, 64, 184, 230>>])
%Proposal{
approvals: [
<<0, 145, 11, 248, 77, 93, 69, 102, 3, 217, 40, 238, 90, 2, 240, 137, 127, 242,
124, 105, 141, 192, 142, 148, 132, 159, 146, 51, 214, 138, 64, 184, 230>>
]
}
"""
@spec add_approvals(t(), list(binary())) :: t()
def add_approvals(prop = %__MODULE__{}, approvals) do
%{prop | approvals: approvals}
end
@doc """
Add an approval to the code proposal
## Examples
iex> %Proposal{}
...> |> Proposal.add_approval(<<0, 145, 11, 248, 77, 93, 69, 102, 3, 217, 40, 238, 90, 2, 240, 137, 127, 242,
...> 124, 105, 141, 192, 142, 148, 132, 159, 146, 51, 214, 138, 64, 184, 230>>)
%Proposal{
approvals: [
<<0, 145, 11, 248, 77, 93, 69, 102, 3, 217, 40, 238, 90, 2, 240, 137, 127, 242,
124, 105, 141, 192, 142, 148, 132, 159, 146, 51, 214, 138, 64, 184, 230>>
]
}
"""
@spec add_approval(t(), binary()) :: t()
def add_approval(prop = %__MODULE__{}, address) when is_binary(address) do
Map.update(prop, :approvals, [address], &[address | &1])
end
@doc """
Determine code proposal TestNets ports
## Examples
iex> %Proposal{ timestamp: ~U[2020-08-17 08:10:16.338088Z] } |> Proposal.testnet_ports()
{ 11296, 16885 }
"""
@spec testnet_ports(t()) ::
{p2p_port :: :inet.port_number(), web_port :: :inet.port_number()}
def testnet_ports(%__MODULE__{timestamp: timestamp}) do
{
rem(DateTime.to_unix(timestamp), 12_345),
rem(DateTime.to_unix(timestamp), 54_321)
}
end
@doc """
Determines if the code approval have been signed by the given address
## Examples
iex> %Proposal{}
...> |> Proposal.add_approval(<<0, 145, 11, 248, 77, 93, 69, 102, 3, 217, 40, 238, 90, 2, 240, 137, 127, 242,
...> 124, 105, 141, 192, 142, 148, 132, 159, 146, 51, 214, 138, 64, 184, 230>>)
...> |> Proposal.signed_by?(<<0, 145, 11, 248, 77, 93, 69, 102, 3, 217, 40, 238, 90, 2, 240, 137, 127, 242,
...> 124, 105, 141, 192, 142, 148, 132, 159, 146, 51, 214, 138, 64, 184, 230>>)
true
"""
@spec signed_by?(t(), binary()) :: boolean()
def signed_by?(%__MODULE__{approvals: approvals}, address) do
address in approvals
end
end
# source: lib/zaryn/governance/code/proposal.ex
defmodule Calypte.Rule do
@moduledoc """
Execution of a rule. Works as interpreter at the moment.
"""
alias Calypte.Ast.{Expr, Var, Value}
alias Calypte.{Binding, Changeset, Rule, Utils}
import Utils
@type id :: term()
@type exec_id :: {id(), Binding.hash()}
defstruct id: nil, if: nil, vars: %{}, then: nil, meta: %{}, modified_vars: %{}
@math_operations [:+, :-, :*, :/]
@comparisons [:>, :>=, :<, :<=, :==]
def id(%__MODULE__{id: id}), do: id
@doc """
Evaluate expression in a match mode
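A hedged sketch (the `Binding` fields shown follow how `match_one/2`
destructures them; the node and attribute shapes are assumptions):
```elixir
binding = %Calypte.Binding{matches: %{}, nodes: %{"person" => %{"age" => [21]}}}
expr = %Expr{type: :>, left: %Var{name: "person", attr: "age"}, right: %Value{val: 18}}
Calypte.Rule.match(expr, binding)
```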
"""
def match(expr_list, binding) when is_list(expr_list) do
expr_list |> match_list(binding) |> List.flatten()
end
def match(expr, binding) do
match_one(expr, binding)
end
def match_list([], binding), do: binding
def match_list([expr | expr_list], binding) do
for binding <- match_one(expr, binding), do: match_list(expr_list, binding)
end
def match_one(expr, %Binding{matches: matches, nodes: nodes} = binding) do
for values <- do_match(expr, matches, nodes), values != [] do
new_matches = values |> List.wrap() |> Enum.reduce(matches, &update_matches/2)
%Binding{binding | matches: new_matches}
end
end
defp do_match(%Expr{type: type, left: left, right: right}, matches, nodes)
when type in @math_operations do
for left_value <- do_match(left, matches, nodes),
right_value <- do_match(right, matches, nodes) do
apply_expr(type, unwrap_value(left_value), unwrap_value(right_value))
end
end
defp do_match(%Expr{type: :=, left: %Var{name: name, attr: attr}} = expr, matches, nodes) do
%Expr{right: right} = expr
for right_value <- do_match(right, matches, nodes) do
value = with {{_, _}, value} <- right_value, do: value
{{name, attr}, to_value(value)}
end
end
defp do_match(%Expr{type: :default, left: left, right: right}, matches, nodes) do
with [] <- do_match(left, matches, nodes) do
%Var{name: name, attr: attr} = left
for right_value <- do_match(right, matches, nodes),
do: {{name, attr}, to_value(right_value, true)}
end
end
defp do_match(%Expr{type: type, left: left, right: right} = _expr, matches, nodes)
when type in @comparisons do
for left_value <- do_match(left, matches, nodes),
right_value <- do_match(right, matches, nodes) do
if apply_expr(type, unwrap_value(left_value), unwrap_value(right_value)),
do: [left_value, right_value],
else: []
end
end
defp do_match(%Var{name: name, attr: attr}, matches, nodes) do
attributes = matches[name][attr] || nodes[name][attr]
for value <- List.wrap(attributes), do: {{name, attr}, value}
end
defp do_match(%Value{val: value}, _, _nodes), do: [from_value(value)]
defp unwrap_value({{_, _}, value}), do: from_value(value)
defp unwrap_value(value), do: value
for operator <- @comparisons ++ @math_operations do
defp apply_expr(unquote(operator), left, right), do: unquote(operator)(left, right)
end
def update_matches({{name, attr}, value}, matches), do: deep_put(matches, [name, attr], value)
def update_matches(_, matches), do: matches
@doc """
Execute binding
"""
def exec(%Binding{rule: %Rule{then: then}, matches: old_matches} = binding) do
%Binding{matches: matches} =
new_binding =
Enum.reduce(then, binding, fn expr, binding ->
[binding] = match(expr, binding)
binding
end)
new_binding = %{new_binding | matches: old_matches, updated_matches: matches}
{new_binding, Changeset.from_binding(new_binding)}
end
@doc """
Check if attribute will be modified or not by rule
"""
def modified_var?(%__MODULE__{modified_vars: modified_vars}, var, attr) do
with nil <- modified_vars[var][attr], do: false
end
end
# source: lib/calypte/rule.ex
defmodule Graph.Pathfindings.Dijkstra do
@moduledoc """
This module contains implementation code for path finding algorithms used by `libgraph`.
"""
import Graph.Utils, only: [vertex_id: 1, edge_weight: 3]
@type heuristic_fun :: (Graph.vertex() -> integer)
@compile {:inline, do_bfs: 4, construct_path: 3, construct_path: 4, calculate_cost: 4}
@doc """
Finds the shortest path between `a` and all other reachable vertices, returning a map from vertex to path.
Returns `nil` if no paths can be found.
The shortest path is calculated here by using a cost function to choose
which path to explore next. The cost function in Dijkstra's algorithm is
`weight(E(A, B))+lower_bound(E(A, B))` where `lower_bound(E(A, B))` is always 0.
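For example (graph construction uses libgraph's public API; the exact shape
of the returned path map is illustrative):
```elixir
g = Graph.new() |> Graph.add_edge(:a, :b) |> Graph.add_edge(:b, :c)
Graph.Pathfindings.Dijkstra.call(g, :a)
# => %{b: [:a, :b], c: [:a, :b, :c], ...}
```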
"""
@spec call(Graph.t(), Graph.vertex()) :: %{optional(Graph.vertex()) => [Graph.vertex()]} | nil
def call(%Graph{} = g, a) do
a_star(g, a, fn _v -> 0 end)
end
@doc """
Finds the shortest path between `a` and all other vertices.
Returns `nil` if paths cannot be found.
This implementation takes a heuristic function which allows you to
calculate the lower bound cost of a given vertex `v`. The algorithm
then uses that lower bound function to determine which path to explore
next in the graph.
The `dijkstra` function is simply `a_star` where the heuristic function
always returns 0, and thus the next vertex is chosen based on the weight of
the edge between it and the current vertex.
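For example (the heuristic below is illustrative; any `vertex -> integer`
lower-bound function will do):
```elixir
g = Graph.new() |> Graph.add_edge(:a, :b) |> Graph.add_edge(:b, :c)
Graph.Pathfindings.Dijkstra.a_star(g, :a, fn _vertex -> 0 end)
```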
"""
@spec a_star(Graph.t(), Graph.vertex(), heuristic_fun) ::
%{
optional(Graph.vertex()) => [Graph.vertex()]
}
| nil
def a_star(%Graph{vertices: vertices, out_edges: out_edges} = graph, a, hfun)
when is_function(hfun, 1) do
with a_id <- vertex_id(a),
{:ok, vertex_a_out_edges} <- Map.fetch(out_edges, a_id) do
shortest_path_tree =
Graph.new()
|> Graph.add_vertex(a_id)
initialized_queue =
Enum.reduce(vertex_a_out_edges, PriorityQueue.new(), fn b_id, queue ->
queue_cost = calculate_cost(graph, a_id, b_id, hfun)
a_to_b_weight = edge_weight(graph, a_id, b_id)
PriorityQueue.push(
queue,
{a_id, b_id, a_to_b_weight},
queue_cost
)
end)
complete_spt =
do_bfs(
initialized_queue,
graph,
shortest_path_tree,
hfun
)
to_path_map(complete_spt, vertices)
else
_ ->
nil
end
end
# Graph inspection convenience method
def id_graph_to_original(
%Graph{edges: edges, vertices: vertices},
vs
) do
collected_vertices =
Enum.reduce(vertices, Graph.new(), fn {_orig, vertex}, graph ->
Graph.add_vertex(graph, Map.get(vs, vertex))
end)
final_graph =
edges
|> Map.keys()
|> Enum.reduce(collected_vertices, fn {v_from, v_to}, graph ->
original_from = Map.get(vs, Map.get(vertices, v_from))
original_to = Map.get(vs, Map.get(vertices, v_to))
Graph.add_edge(graph, Graph.Edge.new(original_from, original_to))
end)
final_graph
end
## Private
defp calculate_cost(%Graph{vertices: vertices} = g, v1_id, v2_id, hfun) do
edge_weight(g, v1_id, v2_id) + hfun.(Map.get(vertices, v2_id))
end
defp do_bfs(
queue,
%Graph{out_edges: oe} = graph,
%Graph{vertices: spt_vertices} = shortest_path_tree,
hfun
) do
case PriorityQueue.pop(queue) do
{{:value, {a_id, b_id, a_to_b_weight}}, remaining_queue} ->
b_id_in_spt = Graph.Utils.vertex_id(b_id)
if Map.has_key?(spt_vertices, b_id_in_spt) do
do_bfs(remaining_queue, graph, shortest_path_tree, hfun)
else
case Map.get(oe, b_id) do
nil ->
updated_shortest_path_tree =
shortest_path_tree
|> Graph.add_vertex(b_id)
|> Graph.add_edge(b_id, a_id)
do_bfs(remaining_queue, graph, updated_shortest_path_tree, hfun)
b_out ->
updated_shortest_path_tree =
shortest_path_tree
|> Graph.add_vertex(b_id)
|> Graph.add_edge(b_id, a_id)
new_queue =
Enum.reduce(b_out, remaining_queue, fn c_id, queue_acc ->
queue_cost = a_to_b_weight + calculate_cost(graph, b_id, c_id, hfun)
PriorityQueue.push(
queue_acc,
{b_id, c_id, a_to_b_weight + edge_weight(graph, b_id, c_id)},
queue_cost
)
end)
do_bfs(new_queue, graph, updated_shortest_path_tree, hfun)
end
end
{:empty, _} ->
shortest_path_tree
end
end
defp construct_path(
v_id_spt,
vertices,
%Graph{} = shortest_path_tree
) do
construct_path(v_id_spt, vertices, shortest_path_tree, [])
end
defp construct_path(
v_id_spt,
vertices,
%Graph{vertices: vertices_spt, out_edges: out_edges_spt} = shortest_path_tree,
path
) do
v_id_actual = Map.get(vertices_spt, v_id_spt)
vertex = Map.get(vertices, v_id_actual)
case vertex do
nil ->
path
valid_vertex ->
updated_path = [valid_vertex | path]
explorable_edges =
out_edges_spt
|> Map.get(v_id_spt, MapSet.new())
|> MapSet.to_list()
case explorable_edges do
[] ->
updated_path
[next_v_id_spt] ->
construct_path(next_v_id_spt, vertices, shortest_path_tree, updated_path)
end
end
end
defp to_path_map(
%Graph{vertices: vertices_spt} = shortest_path_tree,
vertices
) do
Enum.reduce(vertices_spt, %{}, fn {v_id_spt, v_id}, acc ->
path = construct_path(v_id_spt, vertices, shortest_path_tree)
vertex = Map.get(vertices, v_id)
Map.put(acc, vertex, path)
end)
end
end
# source: lib/graph/pathfindings/dijkstra.ex
defmodule BoggleEngine do
@moduledoc """
Boggle board generator and solver.
Versions:
* `:boggle` - 4x4
* `:big_boggle` - 5x5
* `:super_big_boggle` - 6x6
Rules:
* `:standard` - All neighbors are valid
* `:edge` - Only edge neighbors
* `:corner` - Only corner neighbors
* `:wrap` - Out of bounds will map to neighbors
* `:edge_wrap` - Only edge neighbors that are out of bounds will remap
* `:corner_wrap` - Only corner neighbors that are out of bounds will remap
# Example
game = BoggleEngine.new_game(:boggle)
board_string = BoggleEngine.to_string(game)
words = BoggleEngine.solve(game)
score = BoggleEngine.score(words, :boggle)
"""
alias BoggleEngine.Board
alias BoggleEngine.Board.Solver
alias BoggleEngine.Neighbor
alias BoggleEngine.Score
@minimum_word_length %{boggle: 3, big_boggle: 4, super_big_boggle: 4}
defstruct [:board]
@type t :: %__MODULE__{
board: Board.t
}
@type rule :: Neighbor.rule
@type version :: Board.version
@type score :: Score.score
@doc """
Creates new `BoggleEngine` game.
## Examples
iex> BoggleEngine.new_game(:boggle)
#BoggleEngine<version: boggle>
"""
@spec new_game(version) :: t
def new_game(version) do
board = Board.new_board(version)
%BoggleEngine{board: board}
end
@doc """
Creates `BoggleEngine` game from string. String will be truncated if bigger
than board. String will have blanks appended if shorter than board. Each board
tile will start with an uppercase letter and may optionally contain trailing
lowercase letters.
## Examples
iex> BoggleEngine.from_string("SAILZZZZZZZZZZZZ", :boggle)
#BoggleEngine<version: boggle>
"""
@spec from_string(String.t, version) :: t
def from_string(string, version) do
board = Board.from_string(string, version)
%BoggleEngine{board: board}
end
@doc """
Gets string representation of `BoggleEngine`'s board.
## Examples
iex> game = BoggleEngine.from_string("SAILZZZZZZZZZZZZ", :boggle)
iex> BoggleEngine.to_string(game)
"SAILZZZZZZZZZZZZ"
"""
@spec to_string(t) :: String.t
def to_string(%BoggleEngine{board: board}) do
Board.to_string(board)
end
@doc """
Gets version of `BoggleEngine` game.
## Examples
iex> game = BoggleEngine.new_game(:boggle)
iex> BoggleEngine.get_version(game)
:boggle
"""
@spec get_version(t) :: version
def get_version(%BoggleEngine{board: board}) do
Board.get_version(board)
end
@doc """
Finds all words on board.
## Options
* `:rule` - Default is `:standard`. Present rules are specified by their
atoms. Custom rules are defined by a transform function.
* `:minimum_word_length` - Default is 3 for Boggle and 4 for Big Boggle and
Super Big Boggle.
* `:lexicon` - Default is built-in lexicon. Accepts custom lexicon from
`Lexicon`.
## Examples
iex> game = BoggleEngine.from_string("SAILZZZZZZZZZZZZ", :boggle)
iex> BoggleEngine.solve(game) |> Enum.sort()
["ail", "sail"]
"""
@spec solve(t, keyword) :: [String.t]
def solve(%BoggleEngine{board: board}, options \\ []) do
rule = Keyword.get(options, :rule, :standard)
version = Board.get_version(board)
minimum_word_length = Keyword.get(options, :minimum_word_length, @minimum_word_length[version])
solve_options =
if Keyword.has_key?(options, :lexicon) do
lexicon = Keyword.get(options, :lexicon)
[lexicon: lexicon]
else
[]
end
Solver.solve(board, rule, minimum_word_length, solve_options)
end
@doc """
Calculates score of word list.
## Examples
iex> list = ["ail", "sail"]
iex> BoggleEngine.score(list, :boggle)
2
"""
@spec score([String.t], version) :: score
def score(list, version) do
Score.score_list(list, version)
end
defimpl Inspect do
def inspect(%BoggleEngine{board: board}, _opts) do
version = Board.get_version(board)
"#BoggleEngine<version: #{version}>"
end
end
end
# source: lib/boggle_engine.ex
defmodule Wand.Mode do
@type t :: :caret | :tilde | :exact | :custom
@type requirement :: String.t() | {:latest, t}
@type version :: String.t() | :latest | Version.t()
@no_patch ~r/^(\d+)\.(\d+)($|\+.*$|-.*$)/
@moduledoc """
Each requirement in a wand.json follows some type of pattern. An exact pattern is one where the version is exactly specified. A tilde mode allows the patch version to be updated, and the caret mode allows the minor version to be updated.
"""
@doc """
Determine the mode that the requirement is currently using. If the requirement is not one of the types set by wand, it is marked as custom.
## Examples
iex> Wand.Mode.from_requirement("== 2.3.2")
:exact
iex> Wand.Mode.from_requirement("~> 2.3.2")
:tilde
iex> Wand.Mode.from_requirement(">= 2.3.2 and < 3.0.0")
:caret
iex> Wand.Mode.from_requirement(">= 2.3.2 and < 3.0.0 and != 2.3.3")
:custom
"""
@spec from_requirement(requirement) :: t
def from_requirement(:latest), do: :caret
def from_requirement(requirement) do
case Version.Parser.lexer(requirement, []) do
[:==, _] -> :exact
[:~>, _] -> :tilde
[:>=, _, :&&, :<=, _] = p -> parse_caret_patch(p)
[:>=, _, :&&, :<, _] = p -> parse_caret(p)
_ -> :custom
end
end
@doc """
Given a mode and a version, calculate the requirement that combines both part. Throws an exception on failure.
## Examples
iex> Wand.Mode.get_requirement!(:exact, "2.3.1")
"== 2.3.1"
iex> Wand.Mode.get_requirement!(:tilde, "2.3.1")
"~> 2.3.1"
iex> Wand.Mode.get_requirement!(:caret, "2.3.1")
">= 2.3.1 and < 3.0.0"
iex> Wand.Mode.get_requirement!(:caret, "0.3.3")
">= 0.3.3 and < 0.4.0"
"""
@spec get_requirement!(t, version) :: requirement
def get_requirement!(mode, version) do
{:ok, requirement} = get_requirement(mode, version)
requirement
end
@doc """
See `Wand.Mode.get_requirement!`
On success, this returns {:ok, requirement}, and on failure, {:error, :invalid_version} is returned
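## Examples
iex> Wand.Mode.get_requirement(:exact, "2.3.1")
{:ok, "== 2.3.1"}
iex> Wand.Mode.get_requirement(:caret, "not a version")
{:error, :invalid_version}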
"""
@spec get_requirement(t, version) :: {:ok, requirement} | {:error, :invalid_version}
def get_requirement(mode, :latest), do: {:ok, {:latest, mode}}
def get_requirement(mode, version) when is_binary(version) do
case parse(version) do
{:ok, version} -> {:ok, get_requirement(mode, version)}
:error -> {:error, :invalid_version}
end
end
def get_requirement(:caret, %Version{major: 0, minor: 0} = version) do
">= #{version} and <= #{version}"
end
def get_requirement(:caret, %Version{major: 0, minor: minor} = version) do
">= #{version} and < 0.#{minor + 1}.0"
end
def get_requirement(:caret, %Version{major: major} = version) do
">= #{version} and < #{major + 1}.0.0"
end
def get_requirement(:exact, %Version{} = version) do
"== #{version}"
end
def get_requirement(:tilde, %Version{} = version) do
"~> #{version}"
end
defp add_missing_patch(version) do
version
|> String.replace(@no_patch, "\\1.\\2.0\\3")
end
defp parse(version) do
add_missing_patch(version) |> Version.parse()
end
defp parse_caret_patch([:>=, old, :&&, :<=, new]) do
case Version.compare(old, new) do
:eq -> :caret
_ -> :custom
end
end
defp parse_caret([:>=, old, :&&, :<, new]) do
old = Version.parse!(old)
new = Version.parse!(new)
cond do
# patches are handled by parse_caret_patch
# So anything that gets to here is not a caret.
new.major == 0 and new.minor == 0 ->
:custom
new.patch != 0 ->
:custom
Version.compare(old, new) != :lt ->
:custom
# check for ^0.1.3
new.major == 0 and old.major == 0 and old.minor + 1 == new.minor ->
:caret
# throw away all other cases of 0.x
new.major == 0 or new.minor != 0 or old.major == 0 ->
:custom
# check for ^1.2.3
new.major == old.major + 1 ->
:caret
true ->
:custom
end
end
end
# source: lib/mode.ex
alias Graphqexl.Schema
alias Graphqexl.Schema.{
Argument,
Field,
Interface,
Mutation,
Query,
Ref,
Subscription,
TEnum,
Type,
Union,
}
alias Graphqexl.Tokens
alias Treex.Tree
defmodule Graphqexl.Schema.Dsl do
@moduledoc """
Domain-Specific Language for expressing and parsing a GQL string as a `t:Graphqexl.Schema.t/0`
"""
@type gql :: String.t
@doc """
Creates a new enum from the given spec
Returns `t:Graphqexl.Schema.t/0`
For example (the value strings are converted to atoms by `parse_enum_values/1`):
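```elixir
schema |> Dsl.enum(:Color, ["RED", "GREEN", "BLUE"])
```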
"""
@doc since: "0.1.0"
@spec enum(Schema.t, atom, [String.t]):: Schema.t
def enum(schema, name, values),
do: schema |> Schema.register(%TEnum{name: name, values: values |> parse_enum_values})
@doc """
Creates a new interface from the given spec
Returns `t:Graphqexl.Schema.t/0`
TODO: docstring examples
"""
@doc since: "0.1.0"
@spec interface(Schema.t, atom, Tree.t):: Schema.t
def interface(schema, name, fields) do
schema
|> Schema.register(
%Interface{
name: name,
fields: fields |> parse_fields
}
)
end
@doc """
Creates a new mutation from the given spec
Returns `t:Graphqexl.Schema.t/0`
TODO: docstring examples
"""
@doc since: "0.1.0"
@spec mutation(Schema.t, String.t):: Schema.t
def mutation(schema, spec) do
%{"args" => args, "name" => name, "return" => return} =
:operation |> Tokens.patterns |> Regex.named_captures(spec)
schema
|> Schema.register(
%Mutation{
arguments: args |> parse_query_args,
name: name |> String.to_atom,
return: %Ref{type: return |> String.to_atom}
}
)
end
@doc """
Prepares the graphql schema dsl string for parsing
Returns `t:String.t/0`
TODO: docstring examples
"""
@doc since: "0.1.0"
@spec preprocess(gql):: String.t
def preprocess(gql) do
gql
|> strip
|> transform
|> replace
|> compact
end
@doc """
Creates a new query from the given spec
Returns `t:Graphqexl.Schema.t/0`
TODO: docstring examples
"""
@doc since: "0.1.0"
@spec query(Schema.t, String.t):: Schema.t
def query(schema, spec) do
%{"args" => args, "name" => name, "return" => return} =
:operation |> Tokens.patterns |> Regex.named_captures(spec)
schema
|> Schema.register(
%Query{
arguments: args |> parse_query_args,
name: name |> String.to_atom,
return: return |> parse_field_values
}
)
end
@doc """
Builds a schema tree from the given spec, using the passed in `t:Graphqexl.Schema.t/0` to
extrapolate to scalar leaves.
Returns: `t:Graphqexl.Schema.t/0`
"""
@doc since: "0.1.0"
@spec schema(Schema.t, String.t):: Schema.t
def schema(schema, _spec) do
# TODO: check the spec string to see which operations are actually defined.
# Could also always include children for all three, and have some possibly be empty
%{
schema |
tree: %Tree{
value: :schema,
children: [
%Tree{
value: :query,
children: schema.queries
|> Map.values
|> Enum.map(&(%Tree{value: &1.name, children: []}))
},
%Tree{
value: :mutation,
children: schema.mutations
|> Map.values
|> Enum.map(&(%Tree{value: &1.name, children: []}))
}
]
}
}
end
@doc """
Creates a new subscription from the given spec
Returns `t:Graphqexl.Schema.t/0`
TODO: docstring examples
"""
@doc since: "0.1.0"
@spec subscription(Schema.t, String.t):: Schema.t
def subscription(schema, spec) do
%{"args" => args, "name" => name, "return" => return} =
:operation |> Tokens.patterns |> Regex.named_captures(spec)
schema
|> Schema.register(
%Subscription{
arguments: args |> parse_query_args,
name: name |> String.to_atom,
return: %Ref{type: return |> String.to_atom}
}
)
end
@doc """
Creates a new type from the given spec
Returns `t:Graphqexl.Schema.t/0`
TODO: docstring examples
"""
@doc since: "0.1.0"
@spec type(Schema.t, atom, String.t):: Schema.t
@spec type(Schema.t, atom, String.t | nil, Tree.t):: Schema.t
def type(schema, name, implements) do
schema
|> Schema.register(
%Type{
name: name,
implements: %Ref{type: implements |> String.to_atom}
}
)
end
def type(schema, name, nil, fields) do
schema
|> Schema.register(
%Type{
name: name,
implements: nil,
fields: fields |> parse_fields
}
)
end
def type(schema, name, implements, fields) do
schema |>
Schema.register(
%Type{
name: name,
implements: %Ref{type: implements |> String.to_atom},
fields: fields |> parse_fields
}
)
end
@doc """
Creates a new union from the given spec
Returns `t:Graphqexl.Schema.t/0`
TODO: docstring examples
"""
@doc since: "0.1.0"
@spec union(Schema.t, atom, String.t, String.t):: Schema.t
def union(schema, name, type1, type2) do
schema
|> Schema.register(
%Union{
name: name,
type1: %Ref{type: type1 |> String.to_atom},
type2: %Ref{type: type2 |> String.to_atom}
}
)
end
@doc false
defp atomize_field_value(replace, value) when is_list(replace),
do: replace |> Enum.reduce(value, &atomize_field_value/2)
@doc false
defp atomize_field_value(replace, value) when is_atom(value),
do: replace |> atomize_field_value(value |> Atom.to_string)
@doc false
defp atomize_field_value(replace, value) when is_binary(value) do
value
|> String.replace(replace, "")
|> String.to_atom
end
@doc false
defp compact(gql) do
gql
|> String.replace(:fields |> Tokens.get |> Map.get(:open), "")
|> String.replace(:fields |> Tokens.get |> Map.get(:close), "")
|> String.replace(:ignored_whitespace |> Tokens.get, :space |> Tokens.get)
|> regex_replace(:significant_whitespace |> Tokens.patterns, :space |> Tokens.get)
|> String.replace(:operation_delimiter |> Tokens.get, :newline |> Tokens.get)
|> String.replace(:trailing_space |> Tokens.patterns, :newline |> Tokens.get)
|> String.replace("#{:argument_delimiter |> Tokens.get}#{:space |> Tokens.get}", :argument_delimiter |> Tokens.get)
|> String.trim
end
@doc false
defp list_field_value(component = %Ref{}) do
[
%Ref{
type: [
:list
|> Tokens.get
|> Map.get(:open),
:list
|> Tokens.get
|> Map.get(:close)
]
|> atomize_field_value(component.type)
}]
end
@doc false
defp list?(component = %{type: _}), do: component.type |> list?
@doc false
defp list?(value) when is_atom(value), do: list?(value |> Atom.to_string)
@doc false
defp list?(value), do: value |> String.contains?(:list |> Tokens.get |> Map.get(:open))
@doc false
defp maybe_list(ref) do
if ref.type |> list? do ref |> list_field_value else ref end
end
@doc false
defp parse_enum_values(values) do
values |> Enum.map(&String.to_atom/1)
end
@doc false
defp parse_fields(fields) do
fields
|> Enum.map(&(&1 |> String.split(:argument_delimiter |> Tokens.get)))
|> Enum.map(&(
{
&1 |> List.first |> String.to_atom,
%Field{
name: &1 |> List.first |> String.to_atom,
required: &1 |> List.last |> required?,
value: &1 |> List.last |> parse_field_values
}
}))
|> Enum.into(%{})
end
@doc false
defp parse_field_values(value) do
%Ref{
type: value
|> String.replace(:required |> Tokens.get, "")
|> String.to_atom
} |> maybe_list
end
@doc false
defp parse_query_args(args) do
args
|> String.split(:argument_placeholder_separator |> Tokens.get)
|> Enum.map(&(&1 |> String.split(:argument_delimiter |> Tokens.get)))
|> Enum.map(&(
{
&1 |> List.first |> String.to_atom,
%Argument{
name: &1 |> List.first |> String.to_atom,
required: &1 |> List.last |> String.contains?(:required |> Tokens.get),
type: &1 |> List.last |> parse_field_values
}
}
))
|> Enum.into(%{})
end
@doc false
defp regex_unionize(patterns) do
patterns
|> Map.values
|> Enum.join("|")
end
@doc false
defp replace(gql) do
gql
|> regex_replace(:union_type_separator |> Tokens.patterns, :space |> Tokens.get)
|> String.replace(:ignored_delimiter |> Tokens.get, "")
end
@doc false
defp required?(value), do: value |> String.contains?(:required |> Tokens.get)
@doc false
defp regex_replace(string, pattern, replacement),
do: pattern |> Regex.replace(string, replacement)
@doc false
defp strip(gql), do: gql |> regex_replace(:comment |> Tokens.patterns, "")
@doc false
defp transform(gql) do
gql
|> regex_replace(:custom_scalar |> Tokens.patterns, "\\g{1} #{:custom_scalar_placeholder |> Tokens.get}")
|> regex_replace(:union |> Tokens.patterns, "\\g{1}#{:space |> Tokens.get}")
|> regex_replace(:argument |> Tokens.patterns, "\\g{1}#{:argument_delimiter |> Tokens.get}\\g{2}")
|> regex_replace(:implements |> Tokens.patterns, "#{:space |> Tokens.get}\\g{1}")
|> regex_replace(
~r/(#{:operations |> Tokens.get |> regex_unionize})/,
"#{:operation_delimiter |> Tokens.get}\\g{1}"
)
end
end
# source: lib/graphqexl/schema/dsl.ex
defmodule RRPproxy do
@moduledoc """
Documentation for `RRPproxy` which provides API for rrpproxy.net.
## Installation
This package can be installed by adding `rrpproxy` to your list of dependencies in `mix.exs`:
```elixir
def deps do
[
{:rrpproxy, "~> 0.1.7"}
]
end
```
## Configuration
Put the following lines into your `config.exs` or better, into your environment
configuration files like `test.exs`, `dev.exs` or `prod.exs`.
```elixir
config :rrpproxy,
username: "<your login>",
password: "<your password>",
ote: true
```
## Usage Examples
Check for a free domain, where `false` means "not available" and `true` means "available":
```elixir
iex> RRPproxy.check_domain("example.com")
{:ok, false}
```
"""
alias RRPproxy.Client
alias RRPproxy.Connection
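# Normalizes attribute keys to atoms and booleans to the "1"/"0" flags the API expects.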
defp fix_attrs(attrs),
do: Enum.flat_map(attrs, fn {k, v} -> [{to_atom(k), to_value(v)}] end)
defp to_value(true), do: "1"
defp to_value(false), do: "0"
defp to_value(other), do: other
defp to_atom(key) when is_atom(key), do: key
defp to_atom(key), do: String.to_existing_atom(key)
@type return() :: {:ok, any()} | {:error, any()}
@type integer_opt() :: integer() | nil
@type boolean_opt() :: boolean() | nil
@type string_opt() :: String.t() | nil
@type client_opt() :: Client.t() | nil
# Account
@doc """
status_account returns information about the account's financial status.
"""
@spec status_account() :: return
@spec status_account(client_opt) :: return
def status_account(client \\ Client.new()) do
with {:ok, %{code: 200, data: [status]}} <- Connection.call("StatusAccount", [], client) do
{:ok, status}
end
end
@doc """
status_registrar returns your registrar account information.
"""
@spec status_registrar(client_opt) :: return
def status_registrar(client \\ Client.new()) do
with {:ok, %{code: 200, data: statuses}} <- Connection.call("StatusRegistrar", [], client) do
{:ok, Enum.find(statuses, fn status -> Map.has_key?(status, :language) end)}
end
end
@doc """
modify_registrar modifies the registrar's (or subaccount's) settings.
"""
@spec modify_registrar(keyword(), client_opt) :: return
def modify_registrar(registrar, client \\ Client.new()) do
with {:ok, _} <- Connection.call("ModifyRegistrar", registrar, client) do
:ok
end
end
@doc """
query_appendix_list returns a list of all appendices.
"""
@spec query_appendix_list(integer_opt(), integer_opt(), client_opt) :: return
def query_appendix_list(offset \\ 0, limit \\ 1000, client \\ Client.new()) do
params = [first: offset, limit: limit]
with {:ok, %{code: 200, data: appendices, info: info}} <-
Connection.call("QueryAppendixList", params, client) do
{:ok, appendices, info}
end
end
@x_accept_tac String.to_atom("X-ACCEPT-TAC")
@doc """
activate_appendix activates an appendix.
"""
@spec activate_appendix(String.t(), boolean_opt(), client_opt) :: return
def activate_appendix(
appendix,
accept_terms_and_conditions \\ true,
client \\ Client.new()
) do
accept_tac = if accept_terms_and_conditions, do: 1, else: 0
params = [appendix: appendix] ++ [{@x_accept_tac, accept_tac}]
with {:ok, %{code: 200, data: %{"0": %{email: "successful"}}}} <-
Connection.call("ActivateAppendix", params, client) do
:ok
end
end
# Contacts
@doc """
query_contact_list returns a list of all contact handles.
"""
@spec query_contact_list(integer_opt(), integer_opt(), client_opt) :: return
def query_contact_list(offset \\ 0, limit \\ 100, client \\ Client.new()) do
params = [first: offset, limit: limit]
with {:ok, %{code: 200, data: contacts, info: info}} <-
Connection.call("QueryContactList", params, client) do
{:ok, Enum.map(contacts, fn contact -> contact.contact end), info}
end
end
@doc """
status_contact returns a contact handle.
"""
@spec status_contact(String.t(), client_opt) :: return
def status_contact(contact, client \\ Client.new()) do
case Connection.call("StatusContact", [contact: contact], client) do
{:ok, %{code: 200, data: [contact]}} -> {:ok, contact}
{:ok, %{code: 200, data: [contact, %{status: "ok"}]}} -> {:ok, contact}
other -> other
end
end
@doc """
add_contact adds a new contact and returns a contact handle.
"""
@spec add_contact(keyword(), boolean_opt(), boolean_opt(), client_opt) :: return
def add_contact(
contact,
validation \\ true,
pre_verify \\ true,
client \\ Client.new()
) do
params =
contact ++
[
validation: if(validation, do: 1, else: 0),
preverify: if(pre_verify, do: 1, else: 0),
autodelete: 1
]
with {:ok, %{code: 200, data: [contact]}} <- Connection.call("AddContact", params, client) do
{:ok, contact}
end
end
@doc """
modify_contact modifies an existing contact and returns a contact handle.
"""
@spec modify_contact(keyword(), boolean_opt(), boolean_opt(), boolean_opt(), client_opt) ::
return
def modify_contact(
contact,
validation \\ true,
pre_verify \\ false,
check_only \\ false,
client \\ Client.new()
) do
params =
contact ++
[
validation: if(validation, do: 1, else: 0),
preverify: if(pre_verify, do: 1, else: 0),
checkonly: if(check_only, do: 1, else: 0)
]
with {:ok, %{code: 200, data: [contact]}} <-
Connection.call("ModifyContact", params, client) do
{:ok, contact}
end
end
@doc """
delete_contact deletes a given contact.
"""
@spec delete_contact(String.t(), client_opt) :: return
def delete_contact(contact, client \\ Client.new()) do
with {:ok, %{code: 200}} <-
Connection.call("DeleteContact", [contact: contact], client) do
:ok
end
end
@doc """
clone_contact clones the given contact.
"""
@spec clone_contact(String.t(), client_opt) :: return
def clone_contact(contact, client \\ Client.new()) do
with {:ok, %{code: 200, data: [contact]}} <-
Connection.call("CloneContact", [contact: contact], client) do
{:ok, contact}
end
end
@doc """
restore_contact restores a deleted contact.
"""
@spec restore_contact(String.t(), client_opt) :: return
def restore_contact(contact, client \\ Client.new()) do
with {:ok, %{code: 200}} <-
Connection.call("RestoreContact", [contact: contact], client) do
:ok
end
end
@doc """
request_token requests a verification token for the given contact or domain.
"""
@spec request_token(String.t(), client_opt) :: return
def request_token(people_contact_or_domain, client \\ Client.new()) do
params = [contact: people_contact_or_domain, type: "ContactDisclosure"]
with {:ok, %{code: 200}} <- Connection.call("RequestToken", params, client) do
:ok
end
end
# Events
@doc """
delete_event deletes the given event by id.
"""
@spec delete_event(String.t(), client_opt) :: return
def delete_event(event, client \\ Client.new()) do
params = [event: event]
with {:ok, %{code: 200}} <- Connection.call("DeleteEvent", params, client) do
:ok
end
end
@doc """
status_event gets an event by id.
"""
@spec status_event(String.t(), client_opt) :: return
def status_event(event, client \\ Client.new()) do
params = [event: event]
with {:ok, %{code: 200, data: [event]}} <- Connection.call("StatusEvent", params, client) do
{:ok, event}
end
end
@doc """
query_event_list returns a list of events since the given date.
"""
@spec query_event_list(String.t(), keyword() | nil, integer_opt(), integer_opt(), client_opt) ::
return
def query_event_list(
date,
opts \\ [],
offset \\ 0,
limit \\ 1000,
client \\ Client.new()
) do
params = [mindate: date, first: offset, limit: limit] ++ opts
with {:ok, %{code: 200, data: events, info: info}} <-
Connection.call("QueryEventList", params, client) do
{:ok,
Enum.flat_map(events, fn v ->
e = Map.get(v, :event, [])
if is_list(e), do: e, else: [e]
end), info}
end
end
# Domain Tags
@doc """
add_tag adds a tag to be used for tagging domains or zones.
"""
@spec add_tag(String.t(), string_opt(), string_opt(), client_opt) :: return
def add_tag(tag, description \\ "", type \\ "domain", client \\ Client.new()) do
params = [tag: tag, type: type, description: description]
with {:ok, %{code: 200}} <- Connection.call("AddTag", params, client) do
:ok
end
end
@doc """
modify_tag modifies the tag with the given name for domains and zones.
"""
@spec modify_tag(String.t(), keyword(), string_opt(), client_opt) :: return
def modify_tag(
tag,
params,
type \\ "domain",
client \\ Client.new()
) do
params = [tag: tag, type: type] ++ params
with {:ok, %{code: 200}} <- Connection.call("ModifyTag", params, client) do
:ok
end
end
@doc """
delete_tag deletes the given tag.
"""
@spec delete_tag(String.t(), string_opt(), client_opt) :: return
def delete_tag(tag, type \\ "domain", client \\ Client.new()) do
params = [tag: tag, type: type]
with {:ok, %{code: 200}} <- Connection.call("DeleteTag", params, client) do
:ok
end
end
@doc """
status_tag gets the given tag by name.
"""
@spec status_tag(String.t(), string_opt(), client_opt) :: return
def status_tag(tag, type \\ "domain", client \\ Client.new()) do
params = [tag: tag, type: type]
with {:ok, %{code: 200, data: [tag]}} <- Connection.call("StatusTag", params, client) do
{:ok, tag}
end
end
@doc """
query_tag_list gets a list of tags.
"""
@spec query_tag_list(string_opt(), integer_opt(), integer_opt(), client_opt) :: return
def query_tag_list(
type \\ "domain",
offset \\ 0,
limit \\ 1000,
client \\ Client.new()
) do
params = [first: offset, limit: limit, type: type]
with {:ok, %{code: 200, data: tags, info: info}} <-
Connection.call("QueryTagList", params, client) do
{:ok,
Enum.flat_map(tags, fn v ->
e = Map.get(v, :tag, [])
if is_list(e), do: e, else: [e]
end), info}
end
end
# Nameservers
@ipaddresses [
:ipaddress0,
:ipaddress1,
:ipaddress2,
:ipaddress3,
:ipaddress4,
:ipaddress5,
:ipaddress6,
:ipaddress7,
:ipaddress8,
:ipaddress9,
:ipaddress10
]
@doc """
add_nameserver adds a nameserver with the given IP addresses.
"""
@spec add_nameserver(String.t(), [String.t()], client_opt) :: return
def add_nameserver(nameserver, ips, client \\ Client.new()) do
params =
ips
|> Enum.with_index()
|> Enum.flat_map(fn {ip, idx} -> [{Enum.at(@ipaddresses, idx), ip}] end)
params = params ++ [nameserver: nameserver]
with {:ok, %{code: 200}} <- Connection.call("AddNameserver", params, client) do
:ok
end
end
@doc """
modify_nameserver modifies the IP addresses of the given nameserver.
"""
@spec modify_nameserver(String.t(), [String.t()], client_opt) :: return
def modify_nameserver(nameserver, ips, client \\ Client.new()) do
params =
ips
|> Enum.with_index()
|> Enum.flat_map(fn {ip, idx} -> [{String.to_existing_atom("ipaddress#{idx}"), ip}] end)
params = params ++ [nameserver: nameserver]
with {:ok, %{code: 200}} <- Connection.call("ModifyNameserver", params, client) do
:ok
end
end
@doc """
delete_nameserver deletes the given nameserver.
"""
@spec delete_nameserver(String.t(), client_opt) :: return
def delete_nameserver(nameserver, client \\ Client.new()) do
params = [nameserver: nameserver]
with {:ok, %{code: 200}} <- Connection.call("DeleteNameserver", params, client) do
:ok
end
end
@doc """
check_nameserver checks the given nameserver.
"""
@spec check_nameserver(String.t(), client_opt) :: return
def check_nameserver(nameserver, client \\ Client.new()) do
params = [nameserver: nameserver]
with {:ok, %{code: 200}} <- Connection.call("CheckNameserver", params, client) do
:ok
end
end
@doc """
status_nameserver gets the given nameserver by name.
"""
@spec status_nameserver(String.t(), client_opt) :: return
def status_nameserver(nameserver, client \\ Client.new()) do
params = [nameserver: nameserver]
with {:ok, %{code: 200, data: [nameserver]}} <-
Connection.call("StatusNameserver", params, client) do
{:ok, nameserver}
end
end
@doc """
query_nameserver_list gets a list of nameservers.
"""
@spec query_nameserver_list(integer_opt(), integer_opt(), client_opt) :: return
def query_nameserver_list(offset \\ 0, limit \\ 1000, client \\ Client.new()) do
params = [first: offset, limit: limit]
with {:ok, %{code: 200, data: nameservers, info: info}} <-
Connection.call("QueryNameserverList", params, client) do
ret =
nameservers
|> Enum.flat_map(fn ns ->
case Map.get(ns, :nameserver) do
nil -> []
other -> [other]
end
end)
{:ok, ret, info}
end
end
# Domains
@doc """
query_domain_list returns a list of all registered domains.
"""
@spec query_domain_list(integer_opt(), integer_opt(), client_opt) :: return
def query_domain_list(offset \\ 0, limit \\ 1000, client \\ Client.new()) do
params = [first: offset, limit: limit]
with {:ok, %{code: 200, data: domains, info: info}} <-
Connection.call("QueryDomainList", params, client) do
{:ok, Enum.map(domains, fn v -> v.domain end), info}
end
end
@doc """
check_domain checks whether the given domain name is free.
"""
@spec check_domain(String.t(), client_opt) :: return
def check_domain(domain, client \\ Client.new()) do
params = [domain: domain]
case Connection.call("CheckDomain", params, client) do
{:ok, %{code: 210}} -> {:ok, true}
{:ok, %{code: 211}} -> {:ok, false}
other -> other
end
end
@doc """
status_domain gets the given domain by name.
"""
@spec status_domain(String.t(), client_opt) :: return
def status_domain(domain, client \\ Client.new()) do
params = [domain: domain]
with {:ok, %{code: 200, data: [domain]}} <-
Connection.call("StatusDomain", params, client, false, true) do
{:ok, domain}
end
end
@doc """
add_domain registers a new domain.
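A hedged sketch (the contact handle and nameservers are placeholders):
```elixir
{:ok, info} =
RRPproxy.add_domain(
"example.com",
"P-ABC123",
"P-ABC123",
"P-ABC123",
"P-ABC123",
["ns1.example.net", "ns2.example.net"]
)
```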
"""
@spec add_domain(
String.t(),
String.t(),
String.t(),
String.t(),
String.t(),
[String.t()] | nil,
keyword() | nil,
client_opt
) :: return
def add_domain(
domain,
owner,
admin,
tech,
bill,
nameservers \\ [],
opts \\ [],
client \\ Client.new()
) do
params =
nameservers
|> Enum.with_index()
|> Enum.flat_map(fn {ns, i} -> [{String.to_existing_atom("nameserver#{i}"), ns}] end)
params =
params ++
opts ++
[
domain: domain,
ownercontact0: owner,
admincontact0: admin,
techcontact0: tech,
billingcontact0: bill
]
with {:ok, %{code: 200, data: [data]}} <- Connection.call("AddDomain", params, client) do
{:ok, data}
end
end
@doc """
modify_domain modifies the given domain with the given attributes.
"""
@spec modify_domain(String.t(), [String.t()] | nil, client_opt) :: return
def modify_domain(domain, attrs \\ [], client \\ Client.new()) do
params = [domain: domain] ++ fix_attrs(attrs)
with {:ok, %{code: 200}} <- Connection.call("ModifyDomain", params, client) do
:ok
end
end
@doc """
delete_domain deletes a registered domain.
"""
@spec delete_domain(String.t(), string_opt(), client_opt) :: return
def delete_domain(domain, action \\ "instant", client \\ Client.new()) do
params = [domain: domain, action: String.upcase(action)]
with {:ok, %{code: 200, data: [data]}} <- Connection.call("DeleteDomain", params, client) do
{:ok, data}
end
end
@doc """
renew_domain renews a registered domain.
"""
@spec renew_domain(String.t(), integer_opt(), client_opt) :: return
def renew_domain(domain, years \\ 1, client \\ Client.new()) do
params = [domain: domain, period: years]
with {:ok, %{code: 200}} <- Connection.call("RenewDomain", params, client) do
:ok
end
end
@doc """
set_domain_auth_code sets the domain's auth-code for transfers.
"""
@spec set_domain_auth_code(String.t(), String.t(), client_opt) :: return
def set_domain_auth_code(domain, code, client \\ Client.new()) do
params =
[domain: domain, auth: code, type: 1] ++
if code == "",
do: [action: "delete"],
else: [action: "set"]
with {:ok, %{code: 200}} <- Connection.call("SetAuthcode", params, client) do
:ok
end
end
@doc """
set_domain_renewal_mode sets the domain's renewal-mode.
The domain's mode for renewals (optional): DEFAULT | AUTORENEW | AUTOEXPIRE | AUTODELETE | RENEWONCE
Some zones support additional modes:
* de: AUTORENEWMONTHLY
* nl: AUTORENEWQUARTERLY
* com, net, org, info, biz, tv, mobi and me: EXPIREAUCTION
"""
@spec set_domain_renewal_mode(String.t(), string_opt(), string_opt(), client_opt) :: return
def set_domain_renewal_mode(
domain,
mode \\ "default",
token \\ "",
client \\ Client.new()
) do
params =
[domain: domain, renewalmode: mode] ++
if token == "", do: [], else: [token: token]
with {:ok, %{code: 200}} <- Connection.call("SetDomainRenewalMode", params, client) do
:ok
end
end
@doc """
set_domain_transfer_mode sets the domain's transfer-mode.
The domain's mode for transfers: DEFAULT | AUTOAPPROVE | AUTODENY
"""
@spec set_domain_transfer_mode(String.t(), string_opt(), string_opt(), client_opt) :: return
def set_domain_transfer_mode(
domain,
mode \\ "default",
token \\ "",
client \\ Client.new()
) do
params =
[domain: domain, transfermode: mode] ++
if token == "", do: [], else: [token: token]
with {:ok, %{code: 200}} <- Connection.call("SetDomainTransferMode", params, client) do
:ok
end
end
@doc """
restore_domain restores a registered domain.
"""
@spec restore_domain(String.t(), client_opt) :: return
def restore_domain(domain, client \\ Client.new()) do
params = [domain: domain]
with {:ok, %{code: 200}} <- Connection.call("RestoreDomain", params, client) do
:ok
end
end
@doc """
status_owner_change explicitly checks the status of an OwnerChange in detail.
"""
@spec status_owner_change(String.t(), client_opt) :: return
def status_owner_change(domain, client \\ Client.new()) do
params = [domain: domain]
with {:ok, %{code: 200, data: [data]}} <-
Connection.call("StatusOwnerChange", params, client) do
{:ok, data}
end
end
@doc """
get_zone returns the correct zone for the given domainname.
"""
@spec get_zone(String.t(), client_opt) :: return
def get_zone(domain, client \\ Client.new()) do
params = [domain: domain]
with {:ok, %{code: 200, data: [data]}} <- Connection.call("GetZone", params, client) do
{:ok, data.zone}
end
end
@doc """
get_zone_info returns zone information for the given zone.
"""
@spec get_zone_info(String.t(), client_opt) :: return
def get_zone_info(domain, client \\ Client.new()) do
params = [domain: domain]
with {:ok, %{code: 200, data: [data]}} <-
Connection.call("GetZoneInfo", params, client, true) do
{:ok, data}
end
end
# transfers
@doc """
transfer_domain transfers a foreign domain into our account.
"""
@spec transfer_domain(
String.t(),
string_opt(),
string_opt(),
string_opt(),
string_opt(),
string_opt(),
string_opt(),
[String.t()] | nil,
keyword() | nil,
client_opt
) :: return
def transfer_domain(
domain,
action \\ "request",
auth \\ "",
owner \\ "",
admin \\ "",
tech \\ "",
bill \\ "",
nameservers \\ [],
opts \\ [],
client \\ Client.new()
) do
params =
nameservers
|> Enum.with_index()
|> Enum.flat_map(fn {ns, i} -> [{String.to_existing_atom("nameserver#{i}"), ns}] end)
params =
params ++
[domain: domain, action: action] ++
opts ++
if(auth == "", do: [], else: [auth: auth]) ++
if(owner == "", do: [], else: [ownercontact0: owner]) ++
if(admin == "", do: [], else: [admincontact0: admin]) ++
if(tech == "", do: [], else: [techcontact0: tech]) ++
if(bill == "", do: [], else: [billingcontact0: bill])
with {:ok, %{code: 200}} <- Connection.call("TransferDomain", params, client) do
:ok
end
end
@doc """
query_transfer_list returns a list of local transfers.
"""
@spec query_transfer_list(integer_opt(), integer_opt(), client_opt) :: return
def query_transfer_list(offset \\ 0, limit \\ 2000, client \\ Client.new()) do
params = [first: offset, limit: limit]
with {:ok, %{code: 200, data: data, info: info}} <-
Connection.call("QueryTransferList", params, client) do
{:ok, data, info}
end
end
@doc """
query_foreign_transfer_list returns a list of foreign transfers.
"""
@spec query_foreign_transfer_list(integer_opt(), integer_opt(), client_opt) :: return
def query_foreign_transfer_list(
offset \\ 0,
limit \\ 2000,
client \\ Client.new()
) do
params = [first: offset, limit: limit]
with {:ok, %{code: 200, data: data, info: info}} <-
Connection.call("QueryForeignTransferList", params, client) do
{:ok, data, info}
end
end
@doc """
The status_domain_transfer command informs you about the current status of a transfer.
You can check whether the transfer was successfully initiated, or who received the email confirming the transfer.
"""
@spec status_domain_transfer(String.t(), client_opt) :: return
def status_domain_transfer(domain, client \\ Client.new()) do
params = [domain: domain]
with {:ok, %{code: 200, data: [data]}} <-
Connection.call("StatusDomainTransfer", params, client) do
{:ok, data}
end
end
# Finance
@doc """
query_zone_list returns the prices per zone.
"""
@spec query_zone_list(integer_opt(), integer_opt(), client_opt) :: return
def query_zone_list(offset \\ 0, limit \\ 2000, client \\ Client.new()) do
params = [first: offset, limit: limit]
with {:ok, %{code: 200, data: prices, info: info}} <-
Connection.call("QueryZoneList", params, client) do
{:ok, prices, info}
end
end
@doc """
query_accounting_list returns all items for accounting since the given date.
"""
@spec query_accounting_list(String.t(), integer_opt(), integer_opt(), client_opt) :: return
def query_accounting_list(
date,
offset \\ 0,
limit \\ 2000,
client \\ Client.new()
) do
params = [mindate: date, first: offset, limit: limit]
with {:ok, %{code: 200, data: data, info: info}} <-
Connection.call("QueryAccountingList", params, client) do
{:ok, data, info}
end
end
@doc """
query_upcoming_accounting_list returns all items that are upcoming for accounting.
"""
@spec query_upcoming_accounting_list(integer_opt(), integer_opt(), client_opt) :: return
def query_upcoming_accounting_list(
offset \\ 0,
limit \\ 2000,
client \\ Client.new()
) do
params = [first: offset, limit: limit]
with {:ok, %{code: 200, data: data, info: info}} <-
Connection.call("QueryUpcomingAccountingList", params, client) do
{:ok, data, info}
end
end
@doc """
convert_currency converts an amount between currencies according to the current exchange rates.
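
A usage sketch (amount and currencies are illustrative):

    {:ok, converted_amount, rate} = convert_currency("10.0", "USD", "EUR")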
"""
@spec convert_currency(any(), String.t(), string_opt(), client_opt) :: return
def convert_currency(amount, from, to \\ "EUR", client \\ Client.new()) do
params = [amount: amount, from: from, to: to]
with {:ok, %{code: 200, data: [conv]}} <-
Connection.call("ConvertCurrency", params, client) do
{:ok, conv.converted_amount, conv.rate}
end
end
@doc """
query_available_promotion_list returns all available promotions.
"""
@spec query_available_promotion_list(integer_opt(), integer_opt(), client_opt) :: return
def query_available_promotion_list(
offset \\ 0,
limit \\ 2000,
client \\ Client.new()
) do
params = [first: offset, limit: limit]
with {:ok, %{code: 200, data: data, info: info}} <-
Connection.call("QueryAvailablePromotionList", params, client) do
{:ok, data, info}
end
end
end
# --- lib/rrpproxy.ex ---
defmodule Membrane.Element.LiveAudioMixer do
@moduledoc """
An element producing live audio stream by mixing a dynamically changing
set of input streams.
When the mixer goes to `:playing` state it sends some silence
(configured by `out_delay`; see the [docs for options](#module-element-options)).
From that moment, after each interval of time the mixer takes the data received from
upstream elements and produces audio with the duration equal to the interval.
If some upstream element fails to deliver enough samples for the whole
interval to be mixed, its data is dropped (including the data that
comes later but was supposed to be mixed in the current interval).
If none of the inputs provide enough data, the mixer will generate silence.
"""
use Bunch
use Membrane.Log, tags: :membrane_element_live_audiomixer
use Membrane.Filter
alias Membrane.{Buffer, Event, Time}
alias Membrane.Caps.Audio.Raw, as: Caps
alias Membrane.Common.AudioMix
alias Membrane.Element.LiveAudioMixer.Timer
@default_mute_val false
def_options interval: [
type: :time,
description: """
Defines an interval of time between each mix of
incoming streams. The actual interval used might be rounded up
to make sure the number of frames generated for this time period
is an integer.
For example, for sample rate 44 100 Hz the interval will be
rounded to a multiple of 10 ms.
See the moduledoc (`#{inspect(__MODULE__)}`) for details on how the interval is used.
""",
default: 500 |> Time.millisecond()
],
in_delay: [
type: :time,
description: """
A delay before the input streams are mixed for the first time.
""",
default: 200 |> Time.millisecond()
],
out_delay: [
type: :time,
description: """
Duration of additional silence sent before first buffer with mixed audio.
Effectively delays the mixed output stream: this delay will be the
difference between the total duration of the audio produced and the
audio consumed by the sink.
It compensates for the time the buffers need
to reach the sink after being sent from mixer and prevents 'cracks'
produced on every interval because of audio samples being late.
""",
default: 50 |> Time.millisecond()
],
caps: [
type: :struct,
spec: Caps.t(),
description: """
The value defines a raw audio format of pads connected to the
element. It should be the same for all the pads.
"""
],
timer: [
type: :module,
description: """
Module implementing `#{inspect(Timer)}` behaviour used as timer for ticks.
""",
default: Timer.Erlang
]
def_output_pad :output, mode: :push, caps: Caps
def_input_pad :input,
availability: :on_request,
demand_unit: :bytes,
caps: Caps,
options: [
mute: [
type: :boolean,
spec: boolean(),
default: @default_mute_val,
description: """
Determines whether the pad will be muted from the start.
"""
]
]
@impl true
def handle_init(%__MODULE__{caps: caps, interval: interval} = options) when interval > 0 do
second = Time.second(1)
base = div(second, Integer.gcd(second, caps.sample_rate))
# An interval has to:
# - be an integer
# - correspond to an integer number of frames
# to make sure there is no rounding when calculating a demand for each interval
# It is ensured if interval is divisible by base
interval = trunc(Float.ceil(interval / base)) * base
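# Worked example (values assumed): for a 44_100 Hz sample rate,
# gcd(1_000_000_000, 44_100) == 100, so base == 10_000_000 ns (10 ms),
# and an interval of 505 ms is rounded up to 510 ms.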
state =
options
|> Map.from_struct()
|> Map.merge(%{
interval: interval,
outputs: %{},
next_tick_time: nil,
timer_ref: nil
})
{:ok, state}
end
@impl true
def handle_prepared_to_playing(_ctx, state) do
%{
interval: interval,
in_delay: in_delay,
timer: timer
} = state
{:ok, timer_ref} = timer.start_sender(self(), interval, in_delay)
state = %{
state
| timer_ref: timer_ref,
next_tick_time: timer.current_time() + in_delay
}
{{:ok, generate_demands(state)}, state}
end
@impl true
def handle_playing_to_prepared(_ctx, state) do
%{
timer_ref: timer_ref,
timer: timer,
outputs: outputs
} = state
timer_ref |> timer.stop_sender()
outputs =
outputs
|> Enum.map(fn {pad, data} ->
{pad, %{data | queue: <<>>, skip: 0, mute: @default_mute_val}}
end)
|> Map.new()
{:ok, %{state | timer_ref: nil, outputs: outputs}}
end
@impl true
def handle_pad_added(pad, ctx, state) do
state =
state
|> Bunch.Access.put_in([:outputs, pad], %{
queue: <<>>,
sos: false,
eos: false,
skip: 0,
mute: ctx.options[:mute]
})
{:ok, state}
end
@impl true
def handle_event(pad, %Event.StartOfStream{}, _ctx, state) do
%{next_tick_time: next, caps: caps, interval: interval} = state
default_demand = state |> get_default_demand
now = state.timer.current_time()
time_to_tick = (next - now) |> max(0)
{demand, state} =
if time_to_tick < interval do
silent_prefix = caps |> Caps.sound_of_silence(interval - time_to_tick)
state = state |> Bunch.Access.put_in([:outputs, pad, :queue], silent_prefix)
{default_demand - byte_size(silent_prefix), state}
else
# possible if in_delay is greater than interval
to_skip = (time_to_tick - interval) |> Caps.time_to_bytes(caps)
state = state |> Bunch.Access.put_in([:outputs, pad, :skip], to_skip)
{to_skip + default_demand, state}
end
state = state |> Bunch.Access.put_in([:outputs, pad, :sos], true)
{{:ok, demand: {pad, demand}}, state}
end
def handle_event(pad, %Event.EndOfStream{}, _context, state) do
state = state |> Bunch.Access.put_in([:outputs, pad, :eos], true)
{:ok, state}
end
def handle_event(_pad, _event, _context, state) do
{:ok, state}
end
@impl true
def handle_process(pad, buffer, _context, state) do
%Buffer{payload: payload} = buffer
state =
state
|> Bunch.Access.update_in([:outputs, pad], fn %{queue: queue, skip: skip} = data ->
to_skip = min(skip, payload |> byte_size)
<<_skipped::binary-size(to_skip), payload::binary>> = payload
%{data | queue: queue <> payload, skip: skip - to_skip}
end)
{:ok, state}
end
@impl true
def handle_other(
{:tick, _time} = tick,
%{playback_state: :playing} = ctx,
%{out_delay: out_delay, caps: caps} = state
)
when out_delay > 0 do
silence =
caps
|> Caps.sound_of_silence(out_delay)
~> {:buffer, {:output, %Buffer{payload: &1}}}
with {{:ok, actions}, state} <- handle_other(tick, ctx, %{state | out_delay: 0}) do
{{:ok, [silence | actions]}, state}
end
end
def handle_other({:tick, time}, %{playback_state: :playing}, state) do
%{
outputs: outputs
} = state
payload = state |> mix_tracks
demand = state |> get_default_demand
outputs = outputs |> skip_queued_data(demand)
state = %{state | outputs: outputs, next_tick_time: time}
demands = state |> generate_demands
actions = [{:buffer, {:output, %Buffer{payload: payload}}} | demands]
{{:ok, actions}, state}
end
def handle_other({:mute, pad_ref}, _ctx, state) do
do_mute(pad_ref, true, state)
end
def handle_other({:unmute, pad_ref}, _ctx, state) do
do_mute(pad_ref, false, state)
end
def handle_other(_message, _ctx, state) do
{:ok, state}
end
defp do_mute(pad_ref, mute?, %{outputs: outputs} = state) do
state =
if outputs |> Map.has_key?(pad_ref) do
state |> Bunch.Access.put_in([:outputs, pad_ref, :mute], mute?)
else
warn("Unmute error: No such pad #{inspect(pad_ref)}")
state
end
{:ok, state}
end
defp mix_tracks(state) do
%{
interval: interval,
caps: caps,
outputs: outputs
} = state
demand = state |> get_default_demand
outputs
|> Enum.reject(fn {_pad, %{mute: mute}} -> mute end)
|> Enum.map(fn {_pad, %{queue: queue}} -> queue end)
|> Enum.filter(&(byte_size(&1) == demand))
~>> ([] -> [caps |> Caps.sound_of_silence(interval)])
|> AudioMix.mix_tracks(caps)
end
defp skip_queued_data(outputs, to_skip) do
outputs
|> Enum.filter(fn {_pad, %{eos: eos}} -> not eos end)
|> Enum.map(fn {pad, %{sos: started?, queue: queue, skip: old_skip} = data} ->
if started? do
skip = old_skip + to_skip - byte_size(queue)
{pad, %{data | queue: <<>>, skip: skip}}
else
{pad, data}
end
end)
|> Map.new()
end
defp generate_demands(state) do
demand = get_default_demand(state)
state.outputs
|> Enum.flat_map(fn {pad, %{skip: skip, sos: started?}} ->
if started? do
[demand: {pad, demand + skip}]
else
[]
end
end)
end
defp get_default_demand(%{interval: interval, caps: caps}) do
interval |> Caps.time_to_bytes(caps)
end
end
# --- lib/mixer.ex ---
defmodule Membrane.AudioInterleaver do
@moduledoc """
Element responsible for interleaving several mono audio streams into a single interleaved stream.
All input streams should be in the same raw audio format, defined by the `input_caps` option.
Channels are interleaved in the order given in the `order` option - currently required, with no default available.
Each input pad should be identified with your custom id (using `via_in(Pad.ref(:input, your_example_id))`).
"""
use Membrane.Filter
use Bunch
require Membrane.Logger
alias Membrane.AudioInterleaver.DoInterleave
alias Membrane.Buffer
alias Membrane.Caps.Audio.Raw
def_options input_caps: [
type: :struct,
spec: Raw.t(),
description: """
The value defines a raw audio format of pads connected to the
element. It should be the same for all the pads.
""",
default: nil
],
frames_per_buffer: [
type: :integer,
spec: pos_integer(),
description: """
Assumed number of raw audio frames in each buffer.
Used when converting demand from buffers into bytes.
""",
default: 2048
],
order: [
type: :list,
spec: [any()],
description: """
Order in which channels should be interleaved
"""
]
def_output_pad :output,
mode: :pull,
availability: :always,
caps: Raw
def_input_pad :input,
mode: :pull,
availability: :on_request,
demand_unit: :bytes,
caps: {Raw, channels: 1},
options: [
offset: [
spec: Membrane.Time.t(),
default: 0,
description: "Offset of the input audio at the pad."
]
]
@impl true
def handle_init(%__MODULE__{} = options) do
state =
options
|> Map.from_struct()
|> Map.merge(%{
pads: %{},
channels: length(options.order)
})
{:ok, state}
end
@impl true
def handle_pad_added(pad, %{playback_state: :stopped}, state) do
state = put_in(state, [:pads, pad], %{queue: <<>>, stream_ended: false})
{:ok, state}
end
@impl true
def handle_pad_added(_pad, %{playback_state: playback_state}, _state) do
raise("All pads should be connected before starting the element!
Pad added event received in playback state #{playback_state}.")
end
@impl true
def handle_pad_removed(pad, _context, state) do
state = Bunch.Access.delete_in(state, [:pads, pad])
{:ok, state}
end
@impl true
def handle_prepared_to_playing(
_context,
%{input_caps: %Raw{} = input_caps, channels: channels} = state
) do
{{:ok, caps: {:output, %Raw{input_caps | channels: channels}}}, state}
end
@impl true
def handle_prepared_to_playing(_context, %{input_caps: nil} = state) do
{:ok, state}
end
@impl true
def handle_demand(:output, size, :bytes, _context, %{channels: channels} = state) do
do_handle_demand(div(size, channels), state)
end
@impl true
def handle_demand(:output, _buffers_count, :buffers, _context, %{input_caps: nil} = state) do
{:ok, state}
end
@impl true
def handle_demand(
:output,
buffers_count,
:buffers,
_context,
%{frames_per_buffer: frames, input_caps: input_caps} = state
) do
size = buffers_count * Raw.frames_to_bytes(frames, input_caps)
do_handle_demand(size, state)
end
@impl true
def handle_start_of_stream(pad, context, state) do
offset = context.pads[pad].options.offset
silence = Raw.sound_of_silence(state.input_caps, offset)
state =
Bunch.Access.update_in(
state,
[:pads, pad],
&%{&1 | queue: silence}
)
demand_fun = &max(0, &1 - byte_size(silence))
{buffer, state} = interleave(state, min_open_queue_size(state.pads))
{{:ok, demand: {pad, demand_fun}, buffer: buffer}, state}
end
@impl true
def handle_end_of_stream(pad, _context, state) do
state = put_in(state, [:pads, pad, :stream_ended], true)
all_streams_ended =
state.pads
|> Enum.map(fn {_pad, %{stream_ended: stream_ended}} -> stream_ended end)
|> Enum.all?()
if all_streams_ended do
{buffer, state} = interleave(state, longest_queue_size(state.pads))
{{:ok, buffer: buffer, end_of_stream: :output}, state}
else
{buffer, state} = interleave(state, min_open_queue_size(state.pads))
{{:ok, buffer: buffer}, state}
end
end
@impl true
def handle_event(pad, event, _context, state) do
Membrane.Logger.debug("Received event #{inspect(event)} on pad #{inspect(pad)}")
{:ok, state}
end
@impl true
def handle_process(
pad,
%Buffer{payload: payload},
_context,
%{input_caps: input_caps} = state
) do
{new_queue_size, state} = enqueue_payload(payload, pad, state)
if new_queue_size >= Raw.sample_size(input_caps) do
{buffer, state} = interleave(state, min_open_queue_size(state.pads))
{{:ok, buffer: buffer}, state}
else
{{:ok, redemand: :output}, state}
end
end
@impl true
def handle_caps(_pad, input_caps, _context, %{input_caps: nil} = state) do
state = %{state | input_caps: input_caps}
{{:ok, caps: {:output, %{input_caps | channels: state.channels}}, redemand: :output}, state}
end
@impl true
def handle_caps(_pad, input_caps, _context, %{input_caps: input_caps} = state) do
{:ok, state}
end
@impl true
def handle_caps(pad, input_caps, _context, state) do
raise(
RuntimeError,
"received invalid caps on pad #{inspect(pad)}, expected: #{inspect(state.input_caps)}, got: #{inspect(input_caps)}"
)
end
# send demand to input pads that don't have a long enough queue
defp do_handle_demand(size, %{pads: pads} = state) do
pads
|> Enum.map(fn {pad, %{queue: queue}} ->
queue
|> byte_size()
|> then(&{:demand, {pad, max(0, size - &1)}})
end)
|> then(fn demands -> {{:ok, demands}, state} end)
end
defp interleave(%{input_caps: input_caps, pads: pads, order: order} = state, n_bytes) do
sample_size = Raw.sample_size(input_caps)
n_bytes = trunc_to_whole_samples(n_bytes, sample_size)
if n_bytes >= sample_size do
pads = append_silence_if_needed(input_caps, pads, n_bytes)
{payload, pads} = DoInterleave.interleave(n_bytes, sample_size, pads, order)
buffer = {:output, %Buffer{payload: payload}}
{buffer, %{state | pads: pads}}
else
{{:output, []}, state}
end
end
# append silence to each queue shorter than min_length
defp append_silence_if_needed(caps, pads, min_length) do
pads
|> Enum.map(fn {pad, %{queue: queue} = pad_value} ->
{pad, %{pad_value | queue: do_append_silence(queue, min_length, caps)}}
end)
|> Map.new()
end
defp do_append_silence(queue, length_bytes, caps) do
missing_frames = ceil((length_bytes - byte_size(queue)) / Raw.frame_size(caps))
if missing_frames > 0 do
silence = caps |> Raw.sound_of_silence() |> String.duplicate(missing_frames)
queue <> silence
else
queue
end
end
# Returns minimum number of bytes present in all queues that haven't yet received end_of_stream message
defp min_open_queue_size(pads) do
pads
|> Enum.reject(fn {_pad, %{stream_ended: stream_ended}} -> stream_ended end)
|> Enum.map(fn {_pad, %{queue: queue}} -> byte_size(queue) end)
|> Enum.min(fn -> 0 end)
end
defp longest_queue_size(pads) do
pads
|> Enum.map(fn {_pad, %{queue: queue}} -> byte_size(queue) end)
|> Enum.max(fn -> 0 end)
end
# Returns the biggest multiple of `sample_size` that is not bigger than `size`
defp trunc_to_whole_samples(size, sample_size)
when is_integer(size) and is_integer(sample_size) do
rest = rem(size, sample_size)
size - rest
end
# add payload to proper pad's queue
defp enqueue_payload(payload, pad_key, %{pads: pads} = state) do
{new_queue_size, pads} =
Map.get_and_update(
pads,
pad_key,
fn %{queue: queue} = pad ->
{byte_size(queue) + byte_size(payload), %{pad | queue: queue <> payload}}
end
)
{new_queue_size, %{state | pads: pads}}
end
end
# --- lib/membrane_audio_interleaver.ex ---
defmodule MangoPay do
@moduledoc """
The Elixir client for the MangoPay API.
This module is the root of the application.
## Configuring
Set your API key by configuring the :mangopay application.
```
config :mangopay, :client, id: YOUR_MANGOPAY_CLIENT_ID
config :mangopay, :client, passphrase: <PASSWORD>
```
"""
@base_header %{"User-Agent": "Elixir", "Content-Type": "application/json"}
@payline_header %{"Accept-Encoding": "gzip;q=1.0,deflate;q=0.6,identity;q=0.3", "Accept": "*/*", "Host": "homologation-webpayment.payline.com"}
def base_header do
@base_header
end
@doc """
Returns MANGOPAY_BASE_URL
"""
def base_url do
case MangoPay.client()[:env] do
:sandbox -> "https://api.sandbox.mangopay.com"
:prod -> "https://api.mangopay.com"
end
end
@doc """
Returns MANGOPAY_CLIENT
"""
def client do
Application.get_env(:mangopay, :client)
end
def mangopay_version do
"v2.01"
end
def mangopay_version_and_client_id do
"/#{mangopay_version()}/#{MangoPay.client()[:id]}"
end
@doc """
Request to mangopay web API.
## Examples
response = MangoPay.request!("get", "users")
"""
def request! {method, url, body, headers} do
request!(method, url, body, headers)
end
def request! {_method, url, query} do
request!(:get, url, "", "", query)
end
def request!(method, url, body \\ "", headers \\ "", query \\ %{}) do
{method, url, body, headers, _} = full_header_request(method, url, body, headers, query)
filter_and_send(method, url, body, headers, query, true)
|> decode_body()
|> underscore_map()
end
@doc """
Request to mangopay web API.
## Examples
{:ok, response} = MangoPay.request({"get", "users", nil, nil})
"""
def request {method, url, body, headers} do
request(method, url, body, headers)
end
def request {_method, url, query} do
request(:get, url, "", "", query)
end
@doc """
Request to mangopay web API.
## Examples
{:ok, response} = MangoPay.request("get", "users")
"""
def request(method, url, body \\ "", headers \\ "", query \\ %{}) do
{method, url, body, headers, query} = full_header_request(method, url, body, headers, query)
filter_and_send(method, url, body, headers, query, false)
|> tuple_result_for_request
end
defp tuple_result_for_request({:ok, %{status_code: 200} = response}) do
{:ok, decode_body(response) |> underscore_map()}
end
defp tuple_result_for_request({:ok, response}) do
{:error, decode_body(response) |> underscore_map()}
end
defp tuple_result_for_request(request), do: request
defp decode_body(%{body: body}) do
case Poison.decode(body) do
{:ok, decoded_body} -> decoded_body
{:error, _} -> body
end
end
defp underscore_map(%{} = map) do
Enum.reduce(map, %{}, fn({k, v}, acc) ->
underscored_key = underscore_word(k) |> String.to_atom
cond do
is_map(v) -> Map.put_new(acc, underscored_key, underscore_map(v))
true -> Map.put_new(acc, underscored_key, v)
end
end)
end
defp underscore_map(result) when is_list(result), do: Enum.map(result, &(underscore_map(&1)))
defp underscore_map(result), do: result
defp underscore_word(word) do
word
|> String.replace(~r/([A-Z]+)([A-Z][a-z])/, "\\1_\\2")
|> String.replace(~r/([a-z\d])([A-Z])/, "\\1_\\2")
|> String.replace(~r/-/, "_")
|> String.downcase
end
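# For example, underscore_word("KYCLevel") produces "kyc_level" and
# underscore_word("Tag-Value") produces "tag_value".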
def camelize_map(%{} = map) do
Enum.reduce(map, %{}, fn({k, v}, acc) ->
camelized_key = camelize_word(k)
cond do
is_map(v) -> Map.put_new(acc, camelized_key, camelize_map(v))
true -> Map.put_new(acc, camelized_key, v)
end
end)
end
def camelize_word(word) do
~r/(?:^|[-_])|(?=[A-Z])/
|> Regex.split(to_string(word))
|> Enum.filter(&(&1 != ""))
|> camelize_list()
|> Enum.join()
end
defp camelize_list([]), do: []
defp camelize_list([h|tail]) do
[String.capitalize(h)] ++ camelize_list(tail)
end
defp full_header_request(method, url, body, headers, query) do
{method, url, decode_map(body), headers, query}
|> authorization_params()
|> payline_params()
end
defp authorization_params {method, url, body, headers, query} do
headers = case headers do
%{"Authorization": _} -> headers
_ -> Map.merge(base_header(), %{"Authorization": "#{MangoPay.Authorization.pull_token()}"})
end
{method, url, body, headers, query}
end
defp payline_params {method, url, body, headers, query} do
if String.contains?(url, "payline") do
{method, url, body, cond_payline(headers), query}
else
{method, cond_mangopay(url), body, headers, query}
end
end
defp cond_payline headers do
headers
|> Map.update!(:"Content-Type", fn _ -> "application/x-www-form-urlencoded" end)
|> Map.merge(@payline_header)
end
defp cond_mangopay url do
base_url() <> mangopay_version_and_client_id() <> url
end
defp decode_map(body) when is_map(body) do
body
|> camelize_map()
|> Poison.encode!
end
defp decode_map(body) when is_list(body), do: Poison.encode!(body)
defp decode_map(body) when is_binary(body), do: body
# default request send to mangopay
defp filter_and_send(method, url, body, headers, query, true) do
case Mix.env do
:test -> HTTPoison.request!(method, url, body, headers, [params: query, timeout: 500000, recv_timeout: 500000])
_ -> HTTPoison.request!(method, url, body, headers, [params: query, timeout: 4600, recv_timeout: 5000])
end
end
defp filter_and_send(method, url, body, headers, query, _bang) do
case Mix.env do
:test -> HTTPoison.request(method, url, body, headers, [params: query, timeout: 500000, recv_timeout: 500000])
_ -> HTTPoison.request(method, url, body, headers, [params: query, timeout: 4600, recv_timeout: 5000])
end
end
end
# --- lib/mango_pay.ex ---
defmodule Exile do
@moduledoc """
Exile is an alternative to BEAM ports, with back-pressure and non-blocking IO
"""
use Application
@doc false
def start(_type, _args) do
opts = [
name: Exile.WatcherSupervisor,
strategy: :one_for_one
]
# we use DynamicSupervisor for cleaning up external processes on
# :init.stop or SIGTERM
DynamicSupervisor.start_link(opts)
end
@doc """
Runs the given command with arguments and return an Enumerable to read command output.
The first parameter must be a list containing the command and its arguments, for example: `["cat", "file.txt"]`.
### Options
* `input` - Input can be either an `Enumerable` or a function which accepts `Collectable`.
* Enumerable:
```
# List
Exile.stream!(~w(base64), input: ["hello", "world"]) |> Enum.to_list()
# Stream
Exile.stream!(~w(cat), input: File.stream!("log.txt", [], 65536)) |> Enum.to_list()
```
* Collectable:
If the input is a function with arity 1, Exile will call that function with a `Collectable` as the argument. The function must *push* input to this collectable. The return value of the function is ignored.
```
Exile.stream!(~w(cat), input: fn sink -> Enum.into(1..100, sink, &to_string/1) end)
|> Enum.to_list()
```
By default no input is given to the command.
* `exit_timeout` - Duration to wait for the external program to exit after completion, before raising an error. Defaults to `:infinity`
* `max_chunk_size` - Maximum size of each iodata chunk emitted by the stream. The chunk size will vary depending on the amount of data available at the time. Defaults to 65535
* `use_stderr` - When set to true, the stream will contain stderr output along with stdout output. Elements of the stream will be of the form `{:stdout, iodata}` or `{:stderr, iodata}` to differentiate the two streams. Defaults to false. See the example below
All other options are passed to `Exile.Process.start_link/2`
### Examples
```
Exile.stream!(~w(ffmpeg -i pipe:0 -f mp3 pipe:1), input: File.stream!("music_video.mkv", [], 65535))
|> Stream.into(File.stream!("music.mp3"))
|> Stream.run()
```
Stream with stderr
```
Exile.stream!(~w(ffmpeg -i pipe:0 -f mp3 pipe:1),
input: File.stream!("music_video.mkv", [], 65535),
use_stderr: true
)
|> Stream.transform(
fn ->
File.open!("music.mp3", [:write, :binary])
end,
fn elem, file ->
case elem do
{:stdout, data} ->
:ok = IO.binwrite(file, data)
{:stderr, msg} ->
:ok = IO.write(msg)
end
{[], file}
end,
fn file ->
:ok = File.close(file)
end
)
|> Stream.run()
```
"""
@type collectable_func() :: (Collectable.t() -> any())
@spec stream!(nonempty_list(String.t()),
input: Enum.t() | collectable_func(),
exit_timeout: timeout(),
max_chunk_size: pos_integer()
) :: Exile.Stream.t()
def stream!(cmd_with_args, opts \\ []) do
Exile.Stream.__build__(cmd_with_args, opts)
end
end
# --- lib/exile.ex ---
defmodule EctoFacade.Repo do
@moduledoc """
Facade repository that should be used for all operations regarding Ecto.
It forwards all write/update/delete operations to `master_repo` and performs all read operations on one of `read_repos` - which read repository is used depends on the algorithm.
Should be used as:
use EctoFacade.Repo, master_repo: MyApp.Repo,
read_repos: [MyApp.ReadRepoOne, MyApp.ReadRepoTwo],
algorithm: MyApp.CustomReadRepoAlgorithm,
fallback_to_master: false
Possible options:
- `master_repo` - only option that is required, it should be main ecto repository used for writes (and reads if you use only one ecto repository)
- `read_repos` - (optional) list of repositories that should be used for read operations. Defaults to [master_repo].
- `algorithm` - (optional) Module that adhere to EctoFacade.Algorithm behaviour. Defaults to EctoFacade.Algorithms.Random
- `fallback_to_master` - (optional) When no read repository is present, should query fallback to master repo (default: true)
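
A minimal sketch of a custom algorithm module (assuming the
`EctoFacade.Algorithm` behaviour exposes a `get_repo/1` callback, as the
facade calls `algorithm.get_repo(read_repos)` internally):

    defmodule MyApp.CustomReadRepoAlgorithm do
      @behaviour EctoFacade.Algorithm

      # Illustrative only: always pick the first read repository.
      def get_repo([first | _rest]), do: first
    end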
"""
@doc false
defmacro __using__(opts) do
quote bind_quoted: [opts: opts] do
master_repo = Keyword.get(opts, :master_repo)
if master_repo == nil do
raise ArgumentError,
"Master repository should be provided to modules using EctoFacade.Repo"
end
algorithm = Keyword.get(opts, :algorithm, EctoFacade.Algorithms.Random)
fallback_to_master = Keyword.get(opts, :fallback_to_master, true)
@master_repo master_repo
@read_repos Keyword.get(opts, :read_repos, [master_repo])
@algorithm algorithm
@fallback_to_master fallback_to_master
unless is_list(@read_repos) do
raise ArgumentError, "read_repos should be a list of repositories"
end
def master_repo, do: @master_repo
def read_repos, do: @read_repos
# Master repo write/update/delete operations
defdelegate insert_all(schema_or_source, entries, opts \\ []), to: @master_repo
defdelegate update_all(queryable, updates, opts \\ []), to: @master_repo
defdelegate delete_all(queryable, opts \\ []), to: @master_repo
defdelegate insert(struct, opts \\ []), to: @master_repo
defdelegate update(struct, opts \\ []), to: @master_repo
defdelegate insert_or_update(changeset, opts \\ []), to: @master_repo
defdelegate delete(struct, opts \\ []), to: @master_repo
defdelegate insert!(struct, opts \\ []), to: @master_repo
defdelegate update!(struct, opts \\ []), to: @master_repo
defdelegate insert_or_update!(changeset, opts \\ []), to: @master_repo
defdelegate delete!(struct, opts \\ []), to: @master_repo
if function_exported?(@master_repo.__adapter__, :transaction, 3) do
defdelegate transaction(fun_or_multi, opts \\ []), to: @master_repo
defdelegate in_transaction?(), to: @master_repo
defdelegate rollback(value), to: @master_repo
end
# Read repos operations
if @fallback_to_master do
def all(queryable, opts \\ []) do
try do
get_read_repo().all(queryable, opts)
catch
_ -> @master_repo.all(queryable, opts)
end
end
def stream(queryable, opts \\ []) do
try do
get_read_repo().stream(queryable, opts)
catch
_ -> @master_repo.stream(queryable, opts)
end
end
def get(queryable, id, opts \\ []) do
try do
get_read_repo().get(queryable, id, opts)
catch
_ -> @master_repo.get(queryable, id, opts)
end
end
def get!(queryable, id, opts \\ []) do
try do
get_read_repo().get!(queryable, id, opts)
catch
_ -> @master_repo.get!(queryable, id, opts)
end
end
def get_by(queryable, clauses, opts \\ []) do
try do
get_read_repo().get_by(queryable, clauses, opts)
catch
_ -> @master_repo.get_by(queryable, clauses, opts)
end
end
def get_by!(queryable, clauses, opts \\ []) do
try do
get_read_repo().get_by!(queryable, clauses, opts)
catch
_ -> @master_repo.get_by!(queryable, clauses, opts)
end
end
def one(queryable, opts \\ []) do
try do
get_read_repo().one(queryable, opts)
catch
_ -> @master_repo.one(queryable, opts)
end
end
def one!(queryable, opts \\ []) do
try do
get_read_repo().one!(queryable, opts)
catch
_ -> @master_repo.one!(queryable, opts)
end
end
def aggregate(queryable, aggregate, field, opts \\ [])
when aggregate in [:count, :avg, :max, :min, :sum] and is_atom(field) do
try do
get_read_repo().aggregate(queryable, aggregate, field, opts)
catch
_ -> @master_repo.aggregate(queryable, aggregate, field, opts)
end
end
def preload(struct_or_structs_or_nil, preloads, opts \\ []) do
try do
get_read_repo().preload(struct_or_structs_or_nil, preloads, opts)
catch
_ -> @master_repo.preload(struct_or_structs_or_nil, preloads, opts)
end
end
def load(schema_or_types, data) do
try do
get_read_repo().load(schema_or_types, data)
catch
_ -> @master_repo.load(schema_or_types, data)
end
end
else
def all(queryable, opts \\ []) do
get_read_repo().all(queryable, opts)
end
def stream(queryable, opts \\ []), do: get_read_repo().stream(queryable, opts)
def get(queryable, id, opts \\ []), do: get_read_repo().get(queryable, id, opts)
def get!(queryable, id, opts \\ []), do: get_read_repo().get!(queryable, id, opts)
def get_by(queryable, clauses, opts \\ []),
do: get_read_repo().get_by(queryable, clauses, opts)
def get_by!(queryable, clauses, opts \\ []),
do: get_read_repo().get_by!(queryable, clauses, opts)
def one(queryable, opts \\ []), do: get_read_repo().one(queryable, opts)
def one!(queryable, opts \\ []), do: get_read_repo().one!(queryable, opts)
def aggregate(queryable, aggregate, field, opts \\ [])
when aggregate in [:count, :avg, :max, :min, :sum] and is_atom(field) do
get_read_repo().aggregate(queryable, aggregate, field, opts)
end
def preload(struct_or_structs_or_nil, preloads, opts \\ []) do
get_read_repo().preload(struct_or_structs_or_nil, preloads, opts)
end
def load(schema_or_types, data), do: get_read_repo().load(schema_or_types, data)
end
# Helper methods
defp get_read_repo() when is_atom(@algorithm), do: @algorithm.get_repo(@read_repos)
end
end
end
# --- lib/ecto_facade/repo.ex ---
defmodule Exop.Utils do
@moduledoc """
A bunch of common functions.
"""
@no_value :exop_no_value
alias Exop.ValidationChecks
@doc "Tries to make a map from a struct and keyword list"
@spec try_map(any()) :: map() | nil
def try_map(%_{} = struct), do: Map.from_struct(struct)
def try_map(%{} = map), do: map
def try_map([x | _] = keyword) when is_tuple(x), do: Enum.into(keyword, %{})
def try_map([]), do: %{}
def try_map(_), do: nil
@spec put_param_value(any(), Keyword.t() | map(), atom() | String.t()) ::
Keyword.t() | map()
def put_param_value(@no_value, collection, _item_name), do: collection
def put_param_value(value, collection, item_name) when is_map(collection) do
Map.put(collection, item_name, value)
end
@spec defined_params(list(), map()) :: map()
def defined_params(contract, received_params)
when is_list(contract) and is_map(received_params) do
Map.take(received_params, Enum.map(contract, & &1[:name]))
end
@spec resolve_from(map(), list(%{name: atom() | String.t(), opts: Keyword.t()}), map()) :: map()
def resolve_from(_received_params, [], resolved_params), do: resolved_params
def resolve_from(
received_params,
[%{name: contract_item_name, opts: contract_item_opts} | contract_tail],
resolved_params
) do
alias_name = Keyword.get(contract_item_opts, :from)
resolved_params =
if alias_name do
received_params
|> Map.get(alias_name, @no_value)
|> put_param_value(resolved_params, contract_item_name)
|> Map.delete(alias_name)
else
resolved_params
end
resolve_from(received_params, contract_tail, resolved_params)
end
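# For example, with a contract item %{name: :user_id, opts: [from: "userId"]}
# and received params %{"userId" => 1}, resolve_from/3 moves the value under
# the :user_id key, yielding %{user_id: 1}.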
@spec resolve_defaults(map(), list(%{name: atom() | String.t(), opts: Keyword.t()}), map()) ::
map()
def resolve_defaults(_received_params, [], resolved_params), do: resolved_params
def resolve_defaults(
received_params,
[%{name: contract_item_name, opts: contract_item_opts} | contract_tail],
resolved_params
) do
resolved_params =
if Keyword.has_key?(contract_item_opts, :default) &&
!ValidationChecks.check_item_present?(received_params, contract_item_name) do
default_value = Keyword.get(contract_item_opts, :default)
default_value =
if is_function(default_value) do
default_value.(received_params)
else
default_value
end
put_param_value(default_value, resolved_params, contract_item_name)
else
resolved_params
end
resolve_defaults(received_params, contract_tail, resolved_params)
end
@spec resolve_coercions(map(), list(%{name: atom() | String.t(), opts: Keyword.t()}), map()) ::
any()
def resolve_coercions(_received_params, [], coerced_params), do: coerced_params
def resolve_coercions(
received_params,
[%{name: contract_item_name, opts: contract_item_opts} | contract_tail],
coerced_params
) do
if ValidationChecks.check_item_present?(received_params, contract_item_name) do
inner = fetch_inner_checks(contract_item_opts)
coerced_params =
if is_map(inner) do
inner_params = Map.get(received_params, contract_item_name)
coerced_inners =
Enum.reduce(inner, %{}, fn {contract_item_name, contract_item_opts}, acc ->
coerced_value =
resolve_coercions(
inner_params,
[%{name: contract_item_name, opts: contract_item_opts}],
inner_params
)
if is_map(coerced_value) do
to_put = Map.get(coerced_value, contract_item_name, @no_value)
if to_put == @no_value, do: acc, else: Map.put_new(acc, contract_item_name, to_put)
else
coerced_value
end
end)
if is_map(coerced_inners) do
received_params[contract_item_name]
|> Map.merge(coerced_inners)
|> put_param_value(received_params, contract_item_name)
else
put_param_value(coerced_inners, received_params, contract_item_name)
end
else
if Keyword.has_key?(contract_item_opts, :coerce_with) do
coerce_func = Keyword.get(contract_item_opts, :coerce_with)
check_item = ValidationChecks.get_check_item(coerced_params, contract_item_name)
coerced_value = coerce_func.({contract_item_name, check_item}, received_params)
put_param_value(coerced_value, coerced_params, contract_item_name)
else
coerced_params
end
end
resolve_coercions(coerced_params, contract_tail, coerced_params)
else
resolve_coercions(coerced_params, contract_tail, coerced_params)
end
end
@spec fetch_inner_checks(list()) :: map() | nil
def fetch_inner_checks([%{} = inner]), do: inner
def fetch_inner_checks(contract_item_opts) when is_list(contract_item_opts) do
Keyword.get(contract_item_opts, :inner)
end
def fetch_inner_checks(_), do: nil
end
# --- lib/exop/utils.ex ---
defmodule Sanbase.Mock do
import Mock
@doc ~s"""
Return a function of the specified arity that on its N-th call returns the
result of executing the list element at position rem(N - 1, length(list))
(cycling through the list by default).
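
A usage sketch (the counter lives in the ETS table owned by
`Sanbase.TestSetupService`, which is assumed to be running):

    fun = Sanbase.Mock.wrap_consecutives([fn -> 1 end, fn -> 2 end], arity: 0)
    {fun.(), fun.(), fun.()}
    #=> {1, 2, 1}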
"""
def wrap_consecutives(list, opts) do
arity = Keyword.fetch!(opts, :arity)
cycle? = Keyword.get(opts, :cycle?, true)
do_wrap_consecutives(list, arity, cycle?)
end
for arity <- 0..16 do
@arity arity
defp do_wrap_consecutives(list, unquote(arity), cycle?) do
ets_table = Sanbase.TestSetupService.get_ets_table_name()
key = {:wrap_consecutive_key, :rand.uniform(1_000_000_000)}
list_length = list |> length
fn unquote_splicing(Macro.generate_arguments(@arity, __MODULE__)) ->
# Start from -1 because :ets.update_counter/4 returns the value after the
# increment is applied, so the first fetched position is 0
position = :ets.update_counter(ets_table, key, {2, 1}, {key, -1})
fun =
case cycle? do
true ->
list |> Stream.cycle() |> Enum.at(position)
false ->
if(position >= list_length) do
raise(
"Mocked function with wrap_consecutive is called more than #{list_length} times with `cycle?: false`"
)
else
list |> Enum.at(position)
end
end
fun.()
end
end
end
def init(), do: MapSet.new()
def prepare_mock(state \\ MapSet.new(), module, fun_name, fun_body, opts \\ [])
def prepare_mock(state, module, fun_name, fun_body, opts)
when is_atom(module) and is_atom(fun_name) and is_function(fun_body) do
passthrough = if Keyword.get(opts, :passthrough, true), do: [:passthrough], else: []
MapSet.put(state, {module, passthrough, [{fun_name, fun_body}]})
end
def prepare_mock2(state \\ MapSet.new(), captured_fun, data, opts \\ [])
for arity <- 0..16 do
@arity arity
def prepare_mock2(state, captured_fun, data, opts)
when is_function(captured_fun, unquote(arity)) do
{:name, name} = Function.info(captured_fun, :name)
{:module, module} = Function.info(captured_fun, :module)
passthrough = if Keyword.get(opts, :passthrough, true), do: [:passthrough], else: []
fun = fn unquote_splicing(Macro.generate_arguments(@arity, __MODULE__)) ->
data
end
MapSet.put(state, {module, passthrough, [{name, fun}]})
end
end
def prepare_mocks2(state \\ MapSet.new(), list, opts \\ [])
def prepare_mocks2(state, list, opts) when is_list(list) do
Enum.reduce(list, state, fn {captured_fun, data}, acc ->
prepare_mock2(acc, captured_fun, data, opts)
end)
end
def run_with_mocks(state, assert_fun) do
state
|> Enum.to_list()
|> Enum.group_by(fn {module, opts, [{fun, _}]} -> {module, fun, opts} end)
|> Enum.map(fn {{module, fun, opts}, list} ->
fun_mocks =
Enum.map(list, fn {_, _, [{_, body}]} ->
{fun, body}
end)
{module, opts, fun_mocks}
end)
|> with_mocks(do: assert_fun.())
end
end
# --- test/support/mock.ex ---
defmodule State.Alert.InformedEntityActivity do
@moduledoc """
A flattened cache of the current alert activities. It exists because matchspecs can't be used to check
whether an element of a list matches a value, as that's not a pattern expressible in a guard.
"""
@table __MODULE__
@doc """
If no activities are specified to `filter/2`, the agency's default set of `t:Model.Alert.activity/0` are used.
"""
@spec default_activities :: [Model.Alert.activity(), ...]
def default_activities, do: ~w(BOARD EXIT RIDE)
def new(table \\ @table) do
^table = :ets.new(table, [:named_table, :duplicate_bag, {:read_concurrency, true}])
:ok
end
@doc """
Filters `t:Model.Alert.id/0`s to only those that have at least one `t:Model.Alert.t/0` `informed_entity` `activities`
element matching an element of `activities`
## Special values
* If `activities` is empty, then `default_activities/0` is used as the default value.
* If `activities` contains `"ALL"`, then no filtering occurs and all `alert_ids` are returned
"""
@spec filter(atom, Enum.t(), Enum.t()) :: [Model.Alert.id()]
def filter(table \\ @table, alert_ids, activities) do
cond do
# skip cost of checking `activities`
Enum.empty?(alert_ids) ->
alert_ids
Enum.empty?(activities) ->
filter(table, alert_ids, default_activities())
# ALL wins over any specific activity
"ALL" in activities ->
alert_ids
true ->
for alert_id <- alert_ids, alert_id_has_activity?(table, activities, alert_id) do
alert_id
end
end
end
def update(table \\ @table, alerts) do
true = :ets.delete_all_objects(table)
true = :ets.insert(table, alerts_to_tuples(alerts))
:ok
end
defp alert_id_has_activity?(table, activities, alert_id) do
Enum.any?(activities, &:ets.member(table, {&1, alert_id}))
end
defp alerts_to_tuples(alerts) do
for %Model.Alert{id: alert_id, informed_entity: entities} <- alerts,
entity <- entities,
activity <- Map.get(entity, :activities, []) do
key = {activity, alert_id}
{key}
end
end
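# For example, an alert %Model.Alert{id: 1, informed_entity: [%{activities: ["BOARD"]}]}
# is flattened to the single ETS object {{"BOARD", 1}}.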
end
# --- apps/state/lib/state/alert/informed_entity_activity.ex ---
defmodule Checksum.Crc do
@moduledoc """
CRC computation functions
"""
use Bitwise
import Checksum.Helpers
alias Checksum.Crc, as: Crc
defstruct [:width, :poly, :table, :init, :xor_out, :ref_in, :ref_out, :bits_mask, :top_bit]
@doc """
Initializes a `Crc` struct and compute a Crc table
Args:
* `width` - This is the width of the algorithm expressed in bits. This is one less than the width of the poly.
* `poly` - Value of the poly (polynomial)
* `init` - Initial value of the CRC register before any input is processed
* `xor_out` - Value XORed with the final register value to produce the checksum
* `ref_in` - Whether each input byte is reflected (bit-reversed) before use
* `ref_out` - Whether the final register value is reflected before the XOR
"""
def init(width, poly, init, xor_out, ref_in, ref_out) do
%Crc{width: width, poly: poly, init: init, xor_out: xor_out, ref_in: ref_in, ref_out: ref_out}
|> init_bits_mask
|> init_top_bit
|> init_crc_table
end
def init(:crc_8), do: init(8, 0x07, 0x00, 0x00, false, false)
def init(:crc_16), do: init(16, 0x8005, 0x0000, 0x0000, true, true)
def init(:arc), do: init(:crc_16)
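# A sanity-check sketch (0xBB3D is the published CRC-16/ARC check value
# for the ASCII string "123456789", which this setup is expected to match):
#
#     Checksum.Crc.init(:arc) |> Checksum.Crc.calc("123456789")
#     #=> 0xBB3D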
defp init_bits_mask(%Crc{width: width} = crc_params), do: %Crc{crc_params | bits_mask: bits_mask(width)}
defp init_top_bit(%Crc{width: width} = crc_params), do: %Crc{crc_params | top_bit: top_bit(width)}
defp init_crc_table(%Crc{} = crc_params) do
table = 0..255
|> Enum.map(&calc_crc_table_cell(crc_params, &1))
%Crc{crc_params | table: table}
end
defp calc_crc_table_cell(%Crc{width: width, poly: poly, ref_in: ref_in, top_bit: top_bit, bits_mask: bits_mask}, dividend) do
dividend
|> reflect(8, ref_in) # Reflect 8 bits if needed
|> bsl(width-8) #
|> bitwise_calc(0, top_bit, poly)
|> reflect(width, ref_in)
|> band(bits_mask)
end
defp bitwise_calc(remainder, 8, _top_bit, _poly), do: remainder
defp bitwise_calc(remainder, bit, top_bit, poly) do
case remainder &&& top_bit do
0 -> remainder <<< 1
_ -> (remainder <<< 1) ^^^ poly
end |> bitwise_calc(bit+1, top_bit, poly)
end
def calc(%Crc{init: init, ref_in: ref_in, width: width} = params, data), do: calc(params, reflect(init, width, ref_in), data)
defp calc(%Crc{table: table, ref_in: ref_in, width: width} = params, last_crc, <<h, t :: binary>>) do
{crc, index} = case ref_in do
true -> {last_crc >>> 8, last_crc ^^^ h}
false -> {last_crc <<< 8, (last_crc >>> (width-8)) ^^^ h}
end
new_crc = crc ^^^ Enum.at(table, index &&& 0xff)
calc(params, new_crc, t)
end
defp calc(%Crc{width: width, ref_in: ref_in, ref_out: ref_out, xor_out: xor_out, bits_mask: bits_mask}, crc, <<>>) do
(reflect(crc, width, ref_in !== ref_out) ^^^ xor_out) &&& bits_mask
end
end
# --- lib/crc.ex ---
defmodule Ecto.Migrator do
@moduledoc """
This module provides the migration API.
## Example
defmodule MyApp.MigrationExample do
use Ecto.Migration
def up do
execute "CREATE TABLE users(id serial PRIMARY_KEY, username text)"
end
def down do
execute "DROP TABLE users"
end
end
Ecto.Migrator.up(Repo, 20080906120000, MyApp.MigrationExample)
"""
require Logger
alias Ecto.Migration.Runner
alias Ecto.Migration.SchemaMigration
@doc """
Gets all migrated versions.
This function ensures the migration table exists
if no table has been defined yet.
## Options
* `:log` - the level to use for logging. Defaults to `:info`.
Can be any of `Logger.level/0` values or `false`.
* `:prefix` - the prefix to run the migrations on
"""
@spec migrated_versions(Ecto.Repo.t, Keyword.t) :: [integer]
def migrated_versions(repo, opts \\ []) do
verbose_schema_migration repo, "retrieve migrated versions", fn ->
SchemaMigration.ensure_schema_migrations_table!(repo, opts[:prefix])
end
lock_for_migrations repo, opts, fn versions -> versions end
end
@doc """
Runs an up migration on the given repository.
## Options
* `:log` - the level to use for logging. Defaults to `:info`.
Can be any of `Logger.level/0` values or `false`.
* `:prefix` - the prefix to run the migrations on
"""
@spec up(Ecto.Repo.t, integer, module, Keyword.t) :: :ok | :already_up | no_return
def up(repo, version, module, opts \\ []) do
verbose_schema_migration repo, "create schema migrations table", fn ->
SchemaMigration.ensure_schema_migrations_table!(repo, opts[:prefix])
end
lock_for_migrations repo, opts, fn versions ->
if version in versions do
:already_up
else
do_up(repo, version, module, opts)
end
end
end
defp do_up(repo, version, module, opts) do
run_maybe_in_transaction(repo, module, fn ->
attempt(repo, module, :forward, :up, :up, opts)
|| attempt(repo, module, :forward, :change, :up, opts)
|| {:error, Ecto.MigrationError.exception(
"#{inspect module} does not implement a `up/0` or `change/0` function")}
end)
|> case do
:ok ->
verbose_schema_migration repo, "update schema migrations", fn ->
SchemaMigration.up(repo, version, opts[:prefix])
end
:ok
error ->
error
end
end
@doc """
Runs a down migration on the given repository.
## Options
* `:log` - the level to use for logging. Defaults to `:info`.
Can be any of `Logger.level/0` values or `false`.
* `:prefix` - the prefix to run the migrations on
"""
@spec down(Ecto.Repo.t, integer, module, Keyword.t) :: :ok | :already_down | no_return
def down(repo, version, module, opts \\ []) do
verbose_schema_migration repo, "create schema migrations table", fn ->
SchemaMigration.ensure_schema_migrations_table!(repo, opts[:prefix])
end
lock_for_migrations repo, opts, fn versions ->
if version in versions do
do_down(repo, version, module, opts)
else
:already_down
end
end
end
defp do_down(repo, version, module, opts) do
run_maybe_in_transaction(repo, module, fn ->
attempt(repo, module, :forward, :down, :down, opts)
|| attempt(repo, module, :backward, :change, :down, opts)
|| {:error, Ecto.MigrationError.exception(
"#{inspect module} does not implement a `down/0` or `change/0` function")}
end)
|> case do
:ok ->
verbose_schema_migration repo, "update schema migrations", fn ->
SchemaMigration.down(repo, version, opts[:prefix])
end
:ok
error ->
error
end
end
defp run_maybe_in_transaction(repo, module, fun) do
Task.async(fn ->
do_run_maybe_in_transaction(repo, module, fun)
end)
|> Task.await(:infinity)
end
defp do_run_maybe_in_transaction(repo, module, fun) do
cond do
module.__migration__[:disable_ddl_transaction] ->
fun.()
repo.__adapter__.supports_ddl_transaction? ->
{:ok, result} = repo.transaction(fun, [log: false, timeout: :infinity])
result
true ->
fun.()
end
catch kind, reason ->
{kind, reason, System.stacktrace}
end
defp attempt(repo, module, direction, operation, reference, opts) do
if Code.ensure_loaded?(module) and
function_exported?(module, operation, 0) do
Runner.run(repo, module, direction, operation, reference, opts)
:ok
end
end
@doc """
Apply migrations to a repository with a given strategy.
The second argument identifies where the migrations are sourced from. A file
path may be passed, in which case the migrations will be loaded from this
during the migration process. The other option is to pass a list of tuples
that identify the version number and migration modules to be run, for example:
Ecto.Migrator.run(Repo, [{0, MyApp.Migration1}, {1, MyApp.Migration2}, ...], :up, opts)
A strategy must be given as an option.
## Options
* `:all` - runs all available if `true`
* `:step` - runs the specific number of migrations
* `:to` - runs all until the supplied version is reached
* `:log` - the level to use for logging. Defaults to `:info`.
Can be any of `Logger.level/0` values or `false`.
* `:prefix` - the prefix to run the migrations on
"""
@spec run(Ecto.Repo.t, binary | [{integer, module}], atom, Keyword.t) :: [integer]
def run(repo, migration_source, direction, opts) do
verbose_schema_migration repo, "create schema migrations table", fn ->
SchemaMigration.ensure_schema_migrations_table!(repo, opts[:prefix])
end
lock_for_migrations repo, opts, fn versions ->
cond do
opts[:all] ->
run_all(repo, versions, migration_source, direction, opts)
to = opts[:to] ->
run_to(repo, versions, migration_source, direction, to, opts)
step = opts[:step] ->
run_step(repo, versions, migration_source, direction, step, opts)
true ->
{:error, ArgumentError.exception("expected one of :all, :to, or :step strategies")}
end
end
end
@doc """
Returns a list of tuples representing the migration status of the given repo,
without actually running any migrations.
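
For example (hypothetical versions and names):

    Ecto.Migrator.migrations(Repo, "priv/repo/migrations")
    #=> [{:up, 20080906120000, "create_users"}, {:down, 20080906120001, "add_emails"}]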
"""
def migrations(repo, directory) do
repo
|> migrated_versions
|> collect_migrations(directory)
|> Enum.sort_by(fn {_, version, _} -> version end)
end
defp lock_for_migrations(repo, opts, fun) do
query = SchemaMigration.versions(repo, opts[:prefix])
case repo.__adapter__.lock_for_migrations(repo, query, opts, fun) do
{kind, reason, stacktrace} ->
:erlang.raise(kind, reason, stacktrace)
{:error, error} ->
raise error
result ->
result
end
end
defp run_to(repo, versions, migration_source, direction, target, opts) do
within_target_version? = fn
{version, _, _}, target, :up ->
version <= target
{version, _, _}, target, :down ->
version >= target
end
pending_in_direction(versions, migration_source, direction)
|> Enum.take_while(&(within_target_version?.(&1, target, direction)))
|> migrate(direction, repo, opts)
end
defp run_step(repo, versions, migration_source, direction, count, opts) do
pending_in_direction(versions, migration_source, direction)
|> Enum.take(count)
|> migrate(direction, repo, opts)
end
defp run_all(repo, versions, migration_source, direction, opts) do
pending_in_direction(versions, migration_source, direction)
|> migrate(direction, repo, opts)
end
defp pending_in_direction(versions, migration_source, :up) do
migrations_for(migration_source)
|> Enum.filter(fn {version, _name, _file} -> not (version in versions) end)
end
defp pending_in_direction(versions, migration_source, :down) do
migrations_for(migration_source)
|> Enum.filter(fn {version, _name, _file} -> version in versions end)
|> Enum.reverse
end
defp collect_migrations(versions, migration_source) do
ups_with_file =
versions
|> pending_in_direction(migration_source, :down)
|> Enum.map(fn {version, name, _} -> {:up, version, name} end)
ups_without_file =
versions
|> versions_without_file(migration_source)
|> Enum.map(fn version -> {:up, version, "** FILE NOT FOUND **"} end)
downs =
versions
|> pending_in_direction(migration_source, :up)
|> Enum.map(fn {version, name, _} -> {:down, version, name} end)
ups_with_file ++ ups_without_file ++ downs
end
defp versions_without_file(versions, migration_source) do
versions_with_file =
migration_source
|> migrations_for
|> Enum.map(&elem(&1, 0))
versions -- versions_with_file
end
# This function will match directories passed into `Migrator.run`.
defp migrations_for(migration_source) when is_binary(migration_source) do
query = Path.join(migration_source, "*")
for entry <- Path.wildcard(query),
info = extract_migration_info(entry),
do: info
end
# This function will match specific version/modules passed into `Migrator.run`.
defp migrations_for(migration_source) when is_list(migration_source) do
Enum.map migration_source, fn({version, module}) -> {version, module, :existing_module} end
end
defp extract_migration_info(file) do
base = Path.basename(file)
ext = Path.extname(base)
case Integer.parse(Path.rootname(base)) do
{integer, "_" <> name} when ext == ".exs" ->
{integer, name, file}
_ ->
nil
end
end
defp migrate([], direction, _repo, opts) do
level = Keyword.get(opts, :log, :info)
log(level, "Already #{direction}")
[]
end
defp migrate(migrations, direction, repo, opts) do
with :ok <- ensure_no_duplication(migrations),
versions when is_list(versions) <- do_migrate(migrations, direction, repo, opts),
do: Enum.reverse(versions)
end
defp do_migrate(migrations, direction, repo, opts) do
Enum.reduce_while migrations, [], fn {version, name_or_mod, file}, versions ->
with {:ok, mod} <- extract_module(file, name_or_mod),
:ok <- do_direction(direction, repo, version, mod, opts) do
{:cont, [version | versions]}
else
error ->
{:halt, error}
end
end
end
defp do_direction(:up, repo, version, mod, opts) do
do_up(repo, version, mod, opts)
end
defp do_direction(:down, repo, version, mod, opts) do
do_down(repo, version, mod, opts)
end
defp ensure_no_duplication([{version, name, _} | t]) do
cond do
List.keyfind(t, version, 0) ->
{:error, Ecto.MigrationError.exception(
"migrations can't be executed, migration version #{version} is duplicated")}
List.keyfind(t, name, 1) ->
{:error, Ecto.MigrationError.exception(
"migrations can't be executed, migration name #{name} is duplicated")}
true ->
ensure_no_duplication(t)
end
end
defp ensure_no_duplication([]), do: :ok
defp is_migration_module?({mod, _bin}), do: function_exported?(mod, :__migration__, 0)
defp is_migration_module?(mod), do: function_exported?(mod, :__migration__, 0)
defp extract_module(:existing_module, mod) do
if is_migration_module?(mod) do
{:ok, mod}
else
{:error, Ecto.MigrationError.exception(
"module #{inspect mod} is not an Ecto.Migration")}
end
end
defp extract_module(file, _name) do
modules = Code.load_file(file)
case Enum.find(modules, &is_migration_module?/1) do
{mod, _bin} -> {:ok, mod}
_otherwise -> {:error, Ecto.MigrationError.exception(
"file #{Path.relative_to_cwd(file)} is not an Ecto.Migration")}
end
end
defp verbose_schema_migration(repo, reason, fun) do
try do
fun.()
rescue
error ->
Logger.error """
Could not #{reason}. This error usually happens due to the following:
* The database does not exist
* The "schema_migrations" table, which Ecto uses for managing
migrations, was defined by another library
To fix the first issue, run "mix ecto.create".
To address the second, you can run "mix ecto.drop" followed by
"mix ecto.create". Alternatively you may configure Ecto to use
another table for managing migrations:
config #{inspect repo.config[:otp_app]}, #{inspect repo},
migration_source: "some_other_table_for_schema_migrations"
The full error report is shown below.
"""
reraise error, System.stacktrace
end
end
defp log(false, _msg), do: :ok
defp log(level, msg), do: Logger.log(level, msg)
end
# --- lib/ecto/migrator.ex ---
defmodule Phoenix.Component do
@moduledoc ~S'''
API for function components.
A function component is any function that receives
an assigns map as argument and returns a rendered
struct built with [the `~H` sigil](`LiveElement.Helpers.sigil_H/2`).
Here is an example:
defmodule MyComponent do
use Phoenix.Component
# Optionally also bring the HTML helpers
# use Phoenix.HTML
def greet(assigns) do
~H"""
<p>Hello, <%= assigns.name %></p>
"""
end
end
The component can be invoked as a regular function:
MyComponent.greet(%{name: "Jane"})
But it is typically invoked using the function component
syntax from the `~H` sigil:
~H"""
<MyComponent.greet name="Jane" />
"""
If the `MyComponent` module is imported or if the function
is defined locally, you can skip the module name:
~H"""
<.greet name="Jane" />
"""
Similar to any HTML tag inside the `~H` sigil, you can
interpolate attributes values too:
~H"""
<.greet name={@user.name} />
"""
You can learn more about the `~H` sigil [in its documentation](`LiveElement.Helpers.sigil_H/2`).
## `use Phoenix.Component`
Modules that define function components should call
`use Phoenix.Component` at the top. Doing so will import
the functions from both `LiveElement` and
`LiveElement.Helpers` modules. `LiveElement`
and `Phoenix.LiveComponent` automatically invoke
`use Phoenix.Component` for you.
Avoid defining a module for each component. Instead, use modules
to group related function components together.
## Assigns
While inside a function component, you must use `LiveElement.assign/3`
and `LiveElement.assign_new/3` to manipulate assigns,
so that LiveView can track changes to the assigns values.
For example, let's imagine a component that receives the first
name and last name and must compute the name assign. One option
would be:
def show_name(assigns) do
assigns = assign(assigns, :name, assigns.first_name <> assigns.last_name)
~H"""
<p>Your name is: <%= @name %></p>
"""
end
However, when possible, it may be cleaner to break the logic over function
calls instead of precomputed assigns:
def show_name(assigns) do
~H"""
<p>Your name is: <%= full_name(@first_name, @last_name) %></p>
"""
end
defp full_name(first_name, last_name), do: first_name <> last_name
Another example is making an assign optional by providing
a default value:
def field_label(assigns) do
assigns = assign_new(assigns, :help, fn -> nil end)
~H"""
<label>
<%= @text %>
<%= if @help do %>
<span class="help"><%= @help %></span>
<% end %>
</label>
"""
end
## Slots
Slots are a mechanism for passing HTML blocks to function components,
just like the inner content of regular HTML tags.
### Default slots
Any content you pass inside a component is assigned to a default slot
called `@inner_block`. For example, imagine you want to create a button
component like this:
<.button>
This renders <strong>inside</strong> the button!
</.button>
It is quite simple to do so. Simply define your component and call
`render_slot(@inner_block)` where you want to inject the content:
def button(assigns) do
~H"""
<button class="btn">
<%= render_slot(@inner_block) %>
</button>
"""
end
In a nutshell, the content given to the component is assigned to
the `@inner_block` assign and then we use `LiveElement.Helpers.render_slot/2`
to render it.
You can even have the component give a value back to the caller,
by using `let`. Imagine this component:
def unordered_list(assigns) do
~H"""
<ul>
<%= for entry <- @entries do %>
<li><%= render_slot(@inner_block, entry) %></li>
<% end %>
</ul>
"""
end
And now you can invoke it as:
<.unordered_list let={entry} entries={~w(apple banana cherry)}>
I like <%= entry %>
</.unordered_list>
You can also pattern match the arguments provided to the render block. Let's
make our `unordered_list` component fancier:
def unordered_list(assigns) do
~H"\""
<ul>
<%= for entry <- @entries do %>
<li><%= render_block(@inner_block, %{entry: entry, gif_url: random_gif()} %></li>
<% end %>
</ul>
"\""
end
And now we can invoke it like this:
<.unordered_list let={%{entry: entry, gif_url: url}} entries={~w(apple banana cherry)}>
I like <%= entry %>. <img src={url} />
</.unordered_list>
### Named slots
Besides `@inner_block`, it is also possible to pass named slots
to the component. For example, imagine that you want to create
a modal component. The modal component has a header, a footer,
and the body of the modal, which we would use like this:
<.modal>
<:header>
This is the top of the modal.
</:header>
This is the body - everything not in a
named slot goes to @inner_block.
<:footer>
<button>Save</button>
</:footer>
</.modal>
The component itself could be implemented like this:
def modal(assigns) do
~H"""
<div class="modal">
<div class="modal-header">
<%= render_slot(@header) %>
</div>
<div class="modal-body">
<%= render_slot(@inner_block) %>
</div>
<div class="modal-footer">
<%= render_slot(@footer) %>
</div>
</div>
"""
end
If you want to make the `@header` and `@footer` optional,
you can assign them a default of an empty list at the top:
def modal(assigns) do
assigns =
assigns
|> assign_new(:header, fn -> [] end)
|> assign_new(:footer, fn -> [] end)
~H"""
<div class="modal">
...
end
### Named slots with attributes
It is also possible to pass the same named slot multiple
times and to give attributes to each entry.
If multiple slot entries are defined for the same slot,
`render_slot/2` will automatically render all entries,
merging their contents. But sometimes we want more fine-grained
control over each individual slot, including access to their
attributes. Let's see an example: imagine we want to implement
a table component:
<.table rows={@users}>
<:col let={user} label="Name">
<%= user.name %>
</:col>
<:col let={user} label="Address">
<%= user.address %>
</:col>
</.table>
At the top level, we pass the rows as an assign and we define
a `:col` slot for each column we want in the table. Each
column also has a `label`, which we are going to use in the
table header.
Inside the component, you can render the table with headers,
rows, and columns:
def table(assigns) do
~H"""
<table>
<tr>
<%= for col <- @col do %>
<th><%= col.label %></th>
<% end %>
</tr>
<%= for row <- @rows do %>
<tr>
<%= for col <- @col do %>
<td><%= render_slot(col, row) %></td>
<% end %>
</tr>
<% end %>
</table>
"""
end
Each named slot (including the `@inner_block`) is a list of maps,
where the map contains all slot attributes, allowing us to access
the label as `col.label`. This gives us complete control over how
we render them.
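For instance (a hypothetical extension of the table component above), since
each entry is a plain map, we could skip columns based on an optional
`hidden` attribute while rendering the cells:
<%= for col <- @col, !col[:hidden] do %>
<td><%= render_slot(col, row) %></td>
<% end %>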
'''
@doc false
defmacro __using__(_) do
quote do
import LiveElement
import LiveElement.Helpers
end
end
end
|
lib/phoenix_component.ex
| 0.862134
| 0.639652
|
phoenix_component.ex
|
starcoder
|
defmodule Nerves.Network do
require Logger
alias Nerves.Network.Types
@moduledoc """
The Nerves.Network application handles the low level details of connecting
to networks. To quickly get started, create a new Nerves project and add
the following line someplace early on in your program:
Nerves.Network.setup "wlan0", ssid: "myssid", key_mgmt: :"WPA-PSK", psk: "secretsecret"
When you boot your Nerves image, Nerves.Network monitors for an interface
called "wlan0" to be created. This occurs when you plug in a USB WiFi dongle.
If you plug in more than one WiFi dongle, each one will be given a name like
"wlan1", etc. Those may be setup as well.
When not connected, Nerves.Network continually scans
for the desired access point. Once found, it associates and runs DHCP to
acquire an IP address.
"""
@typedoc "Settings to `setup/2`"
@type setup_setting ::
{:ipv4_address_method, :dhcp | :static | :linklocal} |
{:ipv4_address, Types.ip_address} |
{:ipv4_subnet_mask, Types.ip_address} |
{:domain, String.t} |
{:search, String.t} |
{:static_domains, list(String.t)} |
{:nameservers, [Types.ip_address]} |
{:ipv6_dhcp, :stateful | :stateless} |
{:ipv6_nameservers, [Types.ip_address]} |
{:ssid, String.t} |
{:key_mgmt, :"WPA-PSK" | :NONE} |
{:psk, String.t}
@typedoc "Keyword List settings to `setup/2`"
@type setup_settings :: [setup_setting]
@doc """
Configure the specified interface. Settings contains one or more of the
following:
* `:ipv4_address_method` - `:dhcp`, `:static`, or `:linklocal`
* `:ipv4_address` - e.g., "192.168.1.5" (specify when :ipv4_address_method = :static)
* `:ipv4_subnet_mask` - e.g., "255.255.255.0" (specify when :ipv4_address_method = :static)
* `:domain` - e.g., "mycompany.com" (specify when :ipv4_address_method = :static)
* `:nameservers` - e.g., ["8.8.8.8", "8.8.4.4"] (specify when :ipv4_address_method = :static)
* `:ssid` - "My WiFi AP" (specify if this is a wireless interface)
* `:key_mgmt` - e.g., `:"WPA-PSK"` or `:NONE`
* `:psk` - e.g., "my-secret-wlan-key"
See `t:#{__MODULE__}.setup_setting/0` for more info.
"""
@spec setup(Types.ifname, setup_settings) :: :ok
def setup(ifname, settings \\ []) do
Logger.debug("(ifname = #{ifname}, settings = #{inspect settings})")
{:ok, {_new, _old}} = Nerves.Network.Config.put ifname, settings
:ok
end
@doc """
Stop all control of `ifname`
"""
@spec teardown(Types.ifname) :: :ok
def teardown(ifname) do
Logger.debug "#{__MODULE__} teardown(#{ifname})"
{:ok, {_new, _old}} = Nerves.Network.Config.drop ifname
:ok
end
@doc """
Stops selected controls (aka managers) of `ifname`. The controls to stop are passed as a keyword list.
Returns `{:ok, list()}`.
## Parameters
- ifname: String identifying network interface's name i.e. "eth0"
- settings: a keyword list of settings, e.g. `[ipv4_address_method: :dhcp, ipv6_dhcp: :stateless]`. For this function the values are
irrelevant, because the settings are only used to locate the appropriate manager tied to the network interface specified by `ifname`.
## Examples
iex> Nerves.Network.teardown("eth0", [ipv6_dhcp: :stateless])
{:ok, [ok: :ok]}
iex> Nerves.Network.teardown("eth0", [ipv4_address_method: :dhcp])
{:ok, [ok: :ok]}
iex> Nerves.Network.teardown("non_existent", [ipv6_dhcp: :stateless])
[{{:error, :not_found}, {:error, :not_found}}]
"""
@spec teardown(Types.ifname, list()) :: {:ok, list(Nerves.Network.IFSupervisor.child_termination_t())}
defdelegate teardown(ifname, settings), to: Nerves.Network.IFSupervisor, as: :stop
@doc """
Convenience function for returning the current status of a network interface
from SystemRegistry.
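For example (a hypothetical wireless interface):
iex> Nerves.Network.status("wlan0")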
"""
@spec status(Types.ifname) :: Nerves.NetworkInterface.Worker.status | nil
def status(ifname) do
SystemRegistry.match(:_)
|> get_in([:state, :network_interface, ifname])
end
@doc """
If `ifname` is a wireless LAN, scan for access points.
"""
@spec scan(Types.ifname) :: [String.t] | {:error, any}
def scan(ifname) do
Nerves.Network.IFSupervisor.scan ifname
end
@doc """
Change the regulatory domain for wireless operations. This must be set to the
two character `alpha2` code for the country where this device is operating.
See [the kernel database](http://git.kernel.org/cgit/linux/kernel/git/sforshee/wireless-regdb.git/tree/db.txt)
for the latest database and the frequencies allowed per country.
The default is to use the world regulatory domain (00).
You may also configure the regulatory domain in your app's `config/config.exs`:
config :nerves_network,
regulatory_domain: "US"
"""
@spec set_regulatory_domain(String.t) :: :ok
def set_regulatory_domain(country) do
Logger.warn "Regulatory domain currently can only be updated on WiFi device addition."
Application.put_env(:nerves_network, :regulatory_domain, country)
end
end
|
lib/nerves_network.ex
| 0.817793
| 0.445771
|
nerves_network.ex
|
starcoder
|
defmodule Pummpcomm.Session.Exchange.ReadBgTargets do
@moduledoc """
Reads blood glucose targets for throughout the day.
"""
alias Pummpcomm.BloodGlucose
alias Pummpcomm.Session.{Command, Response}
# Constants
@mgdl 1
@mmol 2
@opcode 0x9F
@targets_max_count 8
# Functions
@doc """
Makes a `Pummpcomm.Session.Command.t` to read the low and high blood glucose targets throughout the day.
"""
@spec make(Command.pump_serial()) :: Command.t()
def make(pump_serial) do
%Command{opcode: @opcode, pump_serial: pump_serial}
end
@doc """
Decodes a `Pummpcomm.Session.Response.t` into the `units` the blood glucose targets are in and the high and low target for
each open interval starting at `start`.
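For example, a minimal sketch (hypothetical response data: mg/dL units and a
single 100–180 target starting at midnight; the exact `start` value comes from
`Timex`):
decode(%Response{opcode: 0x9F, data: <<1, 0, 100, 180, 0>>})
# => {:ok, %{units: "mg/dL", targets: [%{start: ~T[00:00:00.000000], bg_low: 100, bg_high: 180}]}}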
"""
@spec decode(Response.t()) :: {
:ok,
%{
targets: [
%{
bg_high: BloodGlucose.blood_glucose(),
bg_low: BloodGlucose.blood_glucose(),
start: NaiveDateTime.t()
}
],
units: String.t()
}
}
def decode(%Response{opcode: @opcode, data: <<units::8, targets::binary>>}) do
{:ok,
%{
units: decode_units(units),
targets: decode_targets(units, targets, [], @targets_max_count)
}}
end
## Private Functions
defp basal_time(raw_time) do
Timex.now()
|> Timex.beginning_of_day()
|> Timex.shift(minutes: 30 * raw_time)
|> DateTime.to_time()
end
defp decode_bg(bg, @mgdl), do: bg
defp decode_bg(bg, @mmol), do: bg / 10
defp decode_targets(_, _, decoded_targets, 0), do: Enum.reverse(decoded_targets)
defp decode_targets(_, <<0::8, _::binary>>, decoded_targets, _) when length(decoded_targets) > 0,
do: Enum.reverse(decoded_targets)
defp decode_targets(
units,
<<raw_start_time::8, bg_low::8, bg_high::8, rest::binary>>,
decoded_targets,
count
) do
target = %{
start: basal_time(raw_start_time),
bg_low: decode_bg(bg_low, units),
bg_high: decode_bg(bg_high, units)
}
decode_targets(units, rest, [target | decoded_targets], count - 1)
end
defp decode_units(@mgdl), do: "mg/dL"
defp decode_units(@mmol), do: "mmol/L"
end
|
lib/pummpcomm/session/exchange/read_bg_targets.ex
| 0.829285
| 0.495484
|
read_bg_targets.ex
|
starcoder
|
defmodule Norm.Core.Spec do
@moduledoc false
# Provides a struct to encapsulate specs
alias __MODULE__
alias Norm.Core.Spec.{
And,
Or
}
defstruct predicate: nil, generator: nil, f: nil
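# `build/1` receives a quoted predicate expression at compile time and returns
# the quoted construction of a %Spec{} (or an And/Or composition). For example
# (hypothetical), `spec(is_integer() and fn x -> x > 0 end)` is expanded by the
# {:and, _, _} clause below into `And.new/2` over the two sub-specs.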
def build({:or, _, [left, right]}) do
l = build(left)
r = build(right)
quote do
%Or{left: unquote(l), right: unquote(r)}
end
end
def build({:and, _, [left, right]}) do
l = build(left)
r = build(right)
quote do
And.new(unquote(l), unquote(r))
end
end
# Anonymous functions
def build(quoted = {f, _, _args}) when f in [:&, :fn] do
predicate = Macro.to_string(quoted)
quote do
run = fn input ->
input |> unquote(quoted).()
end
%Spec{generator: nil, predicate: unquote(predicate), f: run}
end
end
# Standard functions
def build(quoted = {a, _, args}) when is_atom(a) and is_list(args) do
predicate = Macro.to_string(quoted)
quote do
run = fn input ->
input |> unquote(quoted)
end
%Spec{predicate: unquote(predicate), f: run, generator: unquote(a)}
end
end
# Function without parens
def build(quoted = {a, _, _}) when is_atom(a) do
predicate = Macro.to_string(quoted) <> "()"
quote do
run = fn input ->
input |> unquote(quoted)
end
%Spec{predicate: unquote(predicate), f: run, generator: unquote(a)}
end
end
# Remote call
def build({{:., _, _}, _, _} = quoted) do
predicate = Macro.to_string(quoted)
quote do
run = fn input ->
input |> unquote(quoted)
end
%Spec{predicate: unquote(predicate), f: run, generator: :none}
end
end
def build(quoted) do
spec = Macro.to_string(quoted)
raise ArgumentError, "Norm can't build a spec from: #{spec}"
end
if Code.ensure_loaded?(StreamData) do
defimpl Norm.Generatable do
def gen(%{generator: gen, predicate: pred}) do
case build_generator(gen) do
nil -> {:error, pred}
generator -> {:ok, generator}
end
end
defp build_generator(gen) do
case gen do
:is_atom -> StreamData.atom(:alphanumeric)
:is_binary -> StreamData.binary()
:is_bitstring -> StreamData.bitstring()
:is_boolean -> StreamData.boolean()
:is_float -> StreamData.float()
:is_integer -> StreamData.integer()
:is_list -> StreamData.list_of(StreamData.term())
_ -> nil
end
end
end
end
defimpl Norm.Conformer.Conformable do
def conform(%{f: f, predicate: pred}, input, path) do
case f.(input) do
true ->
{:ok, input}
false ->
{:error, [Norm.Conformer.error(path, input, pred)]}
_ ->
raise ArgumentError, "Predicates must return a boolean value"
end
end
def valid?(%{f: _f, predicate: _pred} = spec, input, path) do
{status, _} = conform(spec, input, path)
status == :ok
end
end
@doc false
def __inspect__(spec) do
spec.predicate
end
defimpl Inspect do
def inspect(spec, _) do
Inspect.Algebra.concat(["#Norm.Spec<", spec.predicate, ">"])
end
end
end
|
lib/norm/core/spec.ex
| 0.791176
| 0.625681
|
spec.ex
|
starcoder
|
defmodule Strabo.Compiler do
require Logger
require Macro
alias Strabo.Types, as: T
alias Strabo.Functions, as: F
alias Strabo.Util, as: U
defmodule Sigils do
@doc "Causes a regex to match only at the beginning of a string."
def sigil_p(string, []) do
{:ok, regex} = Regex.compile("^" <> string)
regex
end
end
defmodule Lexer do
@moduledoc """
Functions to go from raw lisp-like syntax to a list of formatted tokens.
"""
import Sigils
# Matches anything between double quotes (including the starting and
# ending quotes.
@string_matcher ~p"\"[^\"]*\""
# Matches a $ followed by letters, underscores, and/or numbers.
@param_matcher ~p"\$[A-Za-z0-9_]+"
# Matches any token containing only letters, underscores and/or numbers,
# starting with a letter or underscore.
@atom_matcher ~p"[A-Za-z_][A-Za-z0-9_]*"
# Matches any integer
@int_matcher ~p"-?\d+"
# Matches any number in float or scientific notation.
@number_matcher ~p"-?\d+(,\d+)*(\.\d+(e\d+)?)"
# Matches parentheses
@paren_matcher ~p"[\(\)]"
# Matches whitespace
@whitespace_matcher ~p"\s+"
@doc """
Splits a raw string into tokens defined by regexes above,
and formats each token with an appropriate formatting function. To
add a new type of token, add another call to
match_or_skip(<regex>, <formatting_function>).
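For example (a hypothetical input):
tokenize("(add 1 2)")
#=> ["(", "add", 1, 2, ")"]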
"""
@spec tokenize(String.t) :: list
def tokenize(text) do
{token, rest} =
text
|> match_or_skip(@string_matcher, &(&1))
|> match_or_skip(@paren_matcher, &(&1))
|> match_or_skip(@param_matcher, &format_parameter/1)
|> match_or_skip(@atom_matcher, &(&1))
|> match_or_skip(@number_matcher, &U.parse_float/1)
|> match_or_skip(@int_matcher, &String.to_integer/1)
|> match_or_skip(@whitespace_matcher, fn _ -> :skip end)
case {token, rest} do
{_, ""} -> [token] # end of the string
{:no_match, _} -> raise("Unmatched token #{token} found.")
{:skip, _} -> tokenize(rest) # token was ignored
{_, _} -> [token | tokenize(rest)] # token accepted
end
end
defp format_parameter(param_string) do
param = param_string
|> String.slice(1, String.length(param_string) - 1)
|> String.to_atom
{:param, {param, [], Elixir}}
end
defp match_or_skip({token, text}, regex, formatter) do
# If a previous regex in the pipeline has already matched, then
# just return the previous result.
if token != :no_match or text == "" do
{token, text}
else
# Otherwise, try to consume the regex from the text.
case Regex.run(regex, text, capture: :first, return: :index) do
nil -> {:no_match, text}
[{0, length}] ->
{head, tail} = String.split_at(text, length)
{formatter.(head), tail}
end
end
end
defp match_or_skip(text, regex, formatter) do
match_or_skip({:no_match, text}, regex, formatter)
end
end
defmodule Parser do
@moduledoc """
Functions to go from formatted tokens to an AST.
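For example (hypothetical), `parse(["(", "add", 1, 2, ")"], MyFuncs)` returns
a quoted zero-arity `fn` whose body calls `MyFuncs.add(1, 2)`; `compile/1` can
then evaluate that quoted expression into a callable function.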
"""
def parse(token_stream, env) do
{[], [ast], args} = parse_cell(token_stream, [], [], env)
{:fn, [], [{:->, [], [sort_and_validate_args(args), ast]}]}
end
def transform(node) do
case node do
{{:., [], [_, :lambda]}, [], [{{:., [], [_, arg_name]}, [], []}, body]} ->
replace_arg_in_body =
fn sub_node ->
case sub_node do
{{:., [], [_, ^arg_name]}, [], []} -> {arg_name, [], Elixir}
_ -> sub_node
end
end
{:fn, [], [{:->, [], [[{arg_name, [], Elixir}], Macro.prewalk(body, replace_arg_in_body)]}]}
_ -> node
end
end
def compile(parse_result) do
parse_result_string = Macro.to_string(parse_result)
Logger.info "Parse result: #{inspect parse_result_string}"
transformed_result = Macro.prewalk(parse_result, &transform/1)
transformed_result_string = Macro.to_string(transformed_result)
Logger.info "Transformed result: #{inspect transformed_result_string}"
{f, []} = Code.eval_quoted(transformed_result)
f
end
@doc """
Sorts a list of parameters (such as [{:"1", [], Elixir}] into numeric order,
and throws an exception if the arguments do contain exactly the atoms :"1"
through :"n" for some integer n.
"""
defp sort_and_validate_args(args) do
sorted = Enum.sort(args, fn {p, _, _}, {q, _, _} ->
U.atom_to_int(p) < U.atom_to_int(q) end)
indices = for {p, _, _} <- sorted, do: U.atom_to_int(p)
case Enum.count sorted do
0 -> :ok
length -> ^indices = for i <- 1..length, do: i
end
sorted
end
@doc """
Translates a lisp-like cell such as ["add", 1, 2] to an Elixir AST (quoted value).
"""
defp cell_to_elixir_ast(cell, env) do
[f | args] = Enum.reverse cell
{{:., [], [{:__aliases__, [alias: env], []}, String.to_atom(f)]}, [], args}
end
defp parse_cell([], ast, args, _), do: {[], ast, args}
defp parse_cell([")" | tail], ast, args, env) do
{tail, cell_to_elixir_ast(ast, env), args}
end
defp parse_cell(["(" | tail], ast, args, env) do
{new_tail, cell, new_args} = parse_cell(tail, [], args, env)
parse_cell(new_tail, [cell | ast], new_args, env)
end
defp parse_cell([{:param, param} | tail], ast, args, env) do
parse_cell(tail, [param | ast], [param | args], env)
end
defp parse_cell([token | tail], ast, args, env) do
parse_cell(tail, [token | ast], args, env)
end
end
end
|
lib/strabo/compiler.ex
| 0.673621
| 0.479869
|
compiler.ex
|
starcoder
|
defmodule GasCodes do
@moduledoc """
Module containing macro definitions for gas cost
From https://github.com/ethereum/go-ethereum/blob/master/params/gas_table.go
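Example (hypothetical usage; callers must `require GasCodes` first, since
these are macros):
require GasCodes
GasCodes._GTRANSACTION() # => 21000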
"""
# credo:disable-for-this-file
# Nothing paid for operations of the set Wzero.
defmacro _GZERO do quote do: 0 end
# Amount of gas to pay for operations of the set Wbase.
defmacro _GBASE do quote do: 2 end
# Amount of gas to pay for operations of the set Wverylow.
defmacro _GVERYLOW do quote do: 3 end
# Amount of gas to pay for operations of the set Wlow.
defmacro _GLOW do quote do: 5 end
# Amount of gas to pay for operations of the set Wmid
defmacro _GMID do quote do: 8 end
# Amount of gas to pay for operations of the set Whigh.
defmacro _GHIGH do quote do: 10 end
# Amount of gas to pay for operations of the set Wextcode.
defmacro _GEXTCODE do quote do: 700 end
# Amount of gas to pay for operations of the set Wextcodesize.
defmacro _GEXTCODESIZE do quote do: 20 end
# Amount of gas to pay for operations of the set Wextcodecopy.
defmacro _GEXTCODECOPY do quote do: 20 end
# Amount of gas to pay for a BALANCE operation.
defmacro _GBALANCE do quote do: 20 end
# Paid for a SLOAD operation.
defmacro _GSLOAD do quote do: 50 end
# Paid for a JUMPDEST operation.
defmacro _GJUMPDEST do quote do: 1 end
# Paid for an SSTORE operation when the storage value is set to
# non-zero from zero.
defmacro _GSSET do quote do: 20000 end
# Paid for an SSTORE operation when the storage value’s zeroness
# remains unchanged or is set to zero.
defmacro _GSRESET do quote do: 5000 end
# Refund given (added into refund counter) when the storage value is
# set to zero from non-zero.
defmacro _RSCLEAR do quote do: 15000 end
# Refund given (added into refund counter) for self-destructing an
# account.
defmacro _RSELFDESTRUCT do quote do: 24000 end
# Amount of gas to pay for a SELFDESTRUCT operation
defmacro _GSELFDESTRUCT do quote do: 5000 end
# Paid for a CREATE operation.
defmacro _GCREATE do quote do: 32000 end
# Paid per byte for a CREATE operation to succeed in placing code
# into state.
defmacro _GCODEDEPOSIT do quote do: 200 end
# Paid for a CALL operation.
defmacro _GCALL do quote do: 40 end
# Paid for a non-zero value transfer as part of the CALL operation.
defmacro _GCALLVALUE do quote do: 9000 end
# A stipend for the called contract subtracted from Gcallvalue for a
# non-zero value transfer.
defmacro _GCALLSTIPEND do quote do: 2300 end
# Paid for a CALL or SELFDESTRUCT operation which creates an account.
defmacro _GNEWACCOUNT do quote do: 25000 end
# Partial payment for an EXP operation.
defmacro _GEXP do quote do: 10 end
# Partial payment when multiplied by ⌈log256(exponent)⌉ for the EXP
# operation.
defmacro _GEXPBYTE do quote do: 10 end # From the go implementation. 50 in the Yellow Paper
# Paid for every additional word when expanding memory.
defmacro _GMEMORY do quote do: 3 end
# Paid by all contract-creating transactions after the Homestead
# transition.
defmacro _GTXCREATE do quote do: 32000 end
# Paid for every zero byte of data or code for a transaction.
defmacro _GTXDATAZERO do quote do: 4 end
# Paid for every non-zero byte of data or code for a transaction.
defmacro _GTXDATANONZERO do quote do: 68 end
# Paid for every transaction.
defmacro _GTRANSACTION do quote do: 21000 end
# Partial payment for a LOG operation.
defmacro _GLOG do quote do: 375 end
# Paid for each byte in a LOG operation’s data.
defmacro _GLOGDATA do quote do: 8 end
# Paid for each topic of a LOG operation.
defmacro _GLOGTOPIC do quote do: 375 end
# Paid for each SHA3 operation.
defmacro _GSHA3 do quote do: 30 end
# Paid for each word (rounded up) for input data to a SHA3 operation.
defmacro _GSHA3WORD do quote do: 6 end
# Partial payment for *COPY operations, multiplied by words copied,
# rounded up.
defmacro _GCOPY do quote do: 3 end
# Payment for BLOCKHASH operation.
defmacro _GBLOCKHASH do quote do: 20 end
end
|
apps/aevm/lib/gas_codes.ex
| 0.817756
| 0.512998
|
gas_codes.ex
|
starcoder
|
defmodule Docker do
@moduledoc ~S"""
Docker Client
The Docker engine exposes its Remote API by binding to unix sockets.
In essence, this is the HTTP protocol carried over those unix sockets.
"""
require Logger
defstruct addr: "",
req: &Docker.Request.get/2
@doc ~S"""
Sets the Docker connection information.
## Examples
```elixir
iex> config = Docker.config("unix:///var/run/docker.sock")
iex> config = Docker.config("http://192.168.0.1:12450")
```
"""
def config(addr \\ "unix:///var/run/docker.sock") do
Map.put(%Docker{}, :addr, addr)
end
@doc ~S"""
Lists containers.
## Examples
```elixir
iex> config = Docker.config(address)
iex> Docker.containers(config)
```
"""
def containers(docker), do: docker.req.("/containers/json?all=true",docker.addr)
@doc ~S"""
Lists images.
## Examples
```elixir
iex> config = Docker.config(address)
iex> Docker.images(config)
```
"""
def images(docker), do: docker.req.("/images/json",docker.addr)
@doc ~S"""
Gets Docker info.
## Examples
```elixir
iex> config = Docker.config(address)
iex> Docker.info(config)
```
"""
def info(docker), do: docker.req.("/info",docker.addr)
@doc ~S"""
Gets Docker Swarm node info.
## Examples
```elixir
iex> config = Docker.config(address)
iex> Docker.nodes(config)
```
"""
def nodes(docker), do: docker.req.("/nodes",docker.addr)
@doc ~S"""
Gets Docker version information.
## Examples
```elixir
iex> config = Docker.config(address)
iex> Docker.version(config)
```
"""
def version(docker), do: docker.req.("/version", docker.addr)
@doc """
Lists volumes.
"""
def volumes(docker), do: docker.req.("/volumes", docker.addr)
@doc ~S"""
Adds a Docker event listener.
## Examples
```elixir
defmodule Example do
def listen do
receive do
{:ok, msg } -> IO.puts msg
end
listen
end
end
config = Docker.config(address)
Docker.add_event_listener(config,spawn(Example, :listen, []))
```
"""
def add_event_listener(docker, pid \\ self()) do
Docker.Request.get("/events",docker.addr,pid)
end
@doc ~S"""
Adds a Docker log listener for the given container.
## Examples
```elixir
config = Docker.config(address)
Docker.add_log_listener(config, container_id)
```
"""
def add_log_listener(docker, id, pid \\ self()) do
Docker.Request.get("/containers/#{id}/logs",docker.addr,pid)
end
@doc ~S"""
Creates a Docker container.
## Examples
```elixir
config = Docker.config(address)
{:ok,resp} = Docker.create_container(config,%{"Image" => "registry:2"})
assert resp.code == 201
```
See the [Docker Remote API](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/create-a-container) for the full list of parameters.
"""
def create_container(docker,data) do
data_string = Poison.encode!(data)
Docker.Request.post("/containers/create",docker.addr,data_string)
end
@doc ~S"""
Starts a container.
## Examples
```elixir
config = Docker.config(address)
{:ok,resp} = Docker.start(config,"containerId")
"""
def start(docker,id) do
Docker.Request.post("/containers/#{id}/start",docker.addr)
end
@doc """
Stops a container.
## Examples
```elixir
config = Docker.config(address)
{:ok,resp} = Docker.stop(config,"containerId")
"""
def stop(docker,id) do
Docker.Request.post("/containers/#{id}/stop",docker.addr)
end
@doc ~S"""
Inspects the container with the given id.
"""
def container(docker,id) do
docker.req.("/containers/#{id}/json",docker.addr)
end
def remove_container(docker,id) do
Docker.Request.delete("/containers/#{id}",docker.addr)
end
@doc ~S"""
Lists the processes running inside a container.
"""
def top(docker,id), do: docker.req.("/containers/#{id}/top",docker.addr)
@doc ~S"""
Gets container stats based on resource usage.
"""
def stats(docker,id), do: docker.req.("/containers/#{id}/stats",docker.addr)
@doc ~S"""
Inspects changes on a container's filesystem.
"""
def changes(docker,id), do: docker.req.("/containers/#{id}/changes",docker.addr)
@doc ~S"""
Kills a container.
"""
def kill(docker, id), do: Docker.Request.post("/containers/#{id}/kill", docker.addr)
end
|
lib/docker.ex
| 0.531209
| 0.643693
|
docker.ex
|
starcoder
|
defmodule NewRelic.Plug.Instrumentation do
@moduledoc """
Utility methods for instrumenting parts of an Elixir app.
"""
@doc """
Instruments a database call and records the elapsed time.
* `opts` may include `:conn`, a `Plug.Conn` that has been configured by `NewRelic.Plug.Phoenix`.
* `action` is the name of the repository method being instrumented.
* `queryable` is the `Queryable` being passed to the repository.
By default, the query name will be inferred from `queryable` and `action`. This can be overridden
by providing a `:query` option in `opts`.
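A minimal sketch (hypothetical `MyApp.Repo` and `MyApp.User`):
instrument_db(:all, MyApp.User, [conn: conn], fn ->
MyApp.Repo.all(MyApp.User)
end)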
"""
@spec instrument_db(atom, Ecto.Queryable.t, Keyword.t, fun) :: any
def instrument_db(action, queryable, opts, f) do
{elapsed, result} = :timer.tc(f)
opts
|> put_model(queryable)
|> put_action(action)
|> record(elapsed)
result
end
defp put_model(opts, queryable) do
case Keyword.fetch(opts, :model) do
{:ok, _} -> opts
:error ->
if model = infer_model(queryable) do
Keyword.put(opts, :model, model)
else
opts
end
end
end
defp put_action(opts, action) do
Keyword.put_new(opts, :action, action)
end
defp infer_model(%{__struct__: model_type, __meta__: %{__struct__: Ecto.Schema.Metadata}}) do
model_name(model_type)
end
# Ecto 1.1 clause
defp infer_model(%{model: model}) do
infer_model(model)
end
# Ecto 2.0 clause
defp infer_model(%{data: data}) do
infer_model(data)
end
defp infer_model(%{__struct__: Ecto.Query, from: {_, model_type}}) do
model_name(model_type)
end
defp infer_model(%{__struct__: Ecto.Query}) do
nil
end
defp infer_model({_, _, [model_type | _]}) do
model_name(model_type)
end
defp infer_model(queryable) do
infer_model(Ecto.Queryable.to_query(queryable))
end
defp model_name(model_type) do
model_type |> Module.split |> List.last
end
defp record(opts, elapsed) do
with {:ok, transaction} <- get_transaction(opts),
do: NewRelic.Transaction.record_db(transaction, get_query(opts), elapsed)
end
defp get_transaction(opts) do
with conn = %{} <- Keyword.get(opts, :conn, {:error, :missing_conn}),
transaction = %NewRelic.Transaction{} <- Map.get(conn.private, :new_relixir_transaction, {:error, :missing_transaction}) do
{:ok, transaction}
else
{:error, :missing_conn} -> get_transaction()
{:error, :missing_transaction} -> get_transaction()
end
end
defp get_transaction() do
case NewRelic.TransactionStore.get() do
nil -> nil
transaction = %NewRelic.Transaction{} -> {:ok, transaction}
end
end
defp get_query(opts) do
case Keyword.fetch(opts, :query) do
{:ok, value} ->
value
:error ->
case {Keyword.fetch(opts, :model), Keyword.fetch(opts, :action)} do
{{:ok, model}, {:ok, action}} ->
{model, action}
_ ->
"SQL"
end
end
end
end
|
lib/new_relic/plug/instrumentation.ex
| 0.821939
| 0.550003
|
instrumentation.ex
|
starcoder
|
defmodule Pov do
# Structs and types
@typedoc """
A tree, which is made of a node with several branches
"""
@type tree :: {any, [tree]}
defmodule Crumb do
defstruct [:parent, left_siblings: [], right_siblings: []]
@type t :: %Crumb{parent: any, left_siblings: [Pov.tree()], right_siblings: [Pov.tree()]}
end
defmodule Zipper do
defstruct [:focus, genealogy: []]
@type t :: %Zipper{focus: Pov.tree(), genealogy: [Crumb.t()]}
end
# Core functions
@doc """
Reparent a tree on a selected node.
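For example (a hypothetical tree):
from_pov({:parent, [{:x, []}, {:sibling, []}]}, :x)
#=> {:ok, {:x, [{:parent, [{:sibling, []}]}]}}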
"""
@spec from_pov(tree :: tree, node :: any) :: {:ok, tree} | {:error, atom}
def from_pov(tree, node) do
case tree |> zip |> search(node) do
{:ok, zipper} -> {:ok, reparent(zipper)}
_ -> {:error, :nonexistent_target}
end
end
@doc """
Finds a path between two nodes
"""
@spec path_between(tree :: tree, from :: any, to :: any) :: {:ok, [any]} | {:error, atom}
def path_between(tree, from, to) do
case tree |> zip |> search(from) do
{:ok, zipper} ->
case zipper |> reparent |> zip |> search(to) do
{:ok, zipper_path} -> {:ok, get_path(zipper_path)}
_ -> {:error, :nonexistent_destination}
end
_ ->
{:error, :nonexistent_source}
end
end
def search(%Zipper{focus: {node, _children}} = zipper, node), do: {:ok, zipper}
def search(%Zipper{} = zipper, node) do
case zipper |> down |> search(node) do
{:ok, z} -> {:ok, z}
_ -> zipper |> right |> search(node)
end
end
def search(nil, _node), do: nil
def reparent(%Zipper{focus: tree, genealogy: []}), do: tree
def reparent(%Zipper{
focus: {node, children},
genealogy: [
%Crumb{parent: parent, left_siblings: left, right_siblings: right} | grandparent
]
}) do
{node, [reparent(%Zipper{focus: {parent, left ++ right}, genealogy: grandparent}) | children]}
end
def get_path(%Zipper{focus: {node, _children}, genealogy: genealogy}) do
parents = Enum.map(genealogy, fn %Crumb{parent: parent} -> parent end)
Enum.reverse([node | parents])
end
# Zipper navigation
# up and left are not actually required for this problem
def zip(tree), do: %Zipper{focus: tree}
def down(%Zipper{focus: {value, [child | children]}, genealogy: genealogy}) do
%Zipper{
focus: child,
genealogy: [%Crumb{parent: value, right_siblings: children} | genealogy]
}
end
def down(_zipper), do: nil
def up(%Zipper{
focus: tree,
genealogy: [
%Crumb{parent: parent, left_siblings: left, right_siblings: right} | grandparents
]
}) do
%Zipper{focus: {parent, left ++ [tree | right]}, genealogy: grandparents}
end
def up(_zipper), do: nil
def left(%Zipper{
focus: tree,
genealogy: [
%Crumb{left_siblings: [left | lefties], right_siblings: right} = crumb | grandparents
]
}) do
%Zipper{
focus: left,
genealogy: [
%Crumb{crumb | left_siblings: lefties, right_siblings: [tree | right]} | grandparents
]
}
end
def left(_zipper), do: nil
def right(%Zipper{
focus: tree,
genealogy: [
%Crumb{left_siblings: left, right_siblings: [right | righties]} = crumb | grandparents
]
}) do
%Zipper{
focus: right,
genealogy: [
%Crumb{crumb | left_siblings: [tree | left], right_siblings: righties} | grandparents
]
}
end
def right(_zipper), do: nil
end
|
exercises/practice/pov/.meta/example.ex
| 0.789599
| 0.744726
|
example.ex
|
starcoder
|
defmodule MyXQL.Query do
@moduledoc """
A struct for a prepared statement that returns a single result.
For the struct returned from a query that returns multiple
results, see `MyXQL.Queries`.
Its public fields are:
* `:name` - The name of the prepared statement;
* `:num_params` - The number of parameter placeholders;
* `:statement` - The prepared statement
## Named and Unnamed Queries
Named queries are identified by the non-empty value in `:name` field
and are meant to be re-used.
Unnamed queries, with `:name` equal to `""`, are automatically closed
after being executed.
"""
@type t :: %__MODULE__{
name: iodata(),
cache: :reference | :statement,
num_params: non_neg_integer(),
statement: iodata()
}
defstruct name: "",
cache: :reference,
num_params: nil,
ref: nil,
statement: nil,
statement_id: nil
end
defmodule MyXQL.Queries do
@moduledoc """
A struct for a prepared statement that returns multiple results.
An example use case is a stored procedure with multiple `SELECT`
statements.
Its public fields are:
* `:name` - The name of the prepared statement;
* `:num_params` - The number of parameter placeholders;
* `:statement` - The prepared statement
## Named and Unnamed Queries
Named queries are identified by the non-empty value in `:name` field
and are meant to be re-used.
Unnamed queries, with `:name` equal to `""`, are automatically closed
after being executed.
"""
@type t :: %__MODULE__{
name: iodata(),
cache: :reference | :statement,
num_params: non_neg_integer(),
statement: iodata()
}
defstruct name: "",
cache: :reference,
num_params: nil,
ref: nil,
statement: nil,
statement_id: nil
end
defimpl DBConnection.Query, for: [MyXQL.Query, MyXQL.Queries] do
def parse(query, _opts) do
query
end
def describe(query, _opts) do
query
end
def encode(%{ref: nil} = query, _params, _opts) do
raise ArgumentError, "query #{inspect(query)} has not been prepared"
end
def encode(%{num_params: nil} = query, _params, _opts) do
raise ArgumentError, "query #{inspect(query)} has not been prepared"
end
def encode(%{num_params: num_params} = query, params, _opts)
when num_params != length(params) do
message =
"expected params count: #{inspect(num_params)}, got values: #{inspect(params)}" <>
" for query: #{inspect(query)}"
raise ArgumentError, message
end
def encode(_query, params, _opts) do
MyXQL.Protocol.encode_params(params)
end
def decode(_query, result, _opts) do
result
end
end
defimpl String.Chars, for: [MyXQL.Query, MyXQL.Queries] do
def to_string(%{statement: statement}) do
IO.iodata_to_binary(statement)
end
end
|
lib/myxql/query.ex
| 0.89151
| 0.716467
|
query.ex
|
starcoder
|
defmodule Timber.Config do
@application :timber
@default_http_body_max_bytes 2048
@doc """
Your Timber application API key. This can be obtained after you create your
application in https://app.timber.io
# Example
```elixir
config :timber, :api_key, "<KEY>"
```
"""
def api_key do
case Application.get_env(@application, :api_key) do
{:system, env_var_name} -> System.get_env(env_var_name)
api_key when is_binary(api_key) -> api_key
_else -> nil
end
end
@doc """
Helpful to inspect internal Timber activity; a useful debugging utility.
If specified, Timber will write messages to this device. We cannot use the
standard Logger directly because it would create an infinite loop.
"""
def debug_io_device do
Application.get_env(@application, :debug_io_device)
end
@doc """
Change the name of the `Logger` metadata key that Timber uses for events.
By default, this is `:event`
# Example
```elixir
config :timber, :event_key, :timber_event
Logger.info("test", timber_event: my_event)
```
"""
def event_key, do: Application.get_env(@application, :event_key, :event)
@doc """
Allows for the sanitizations of custom header keys. This should be used to
ensure sensitive data, such as API keys, do not get logged.
**Note, the keys passed must be lowercase!**
Timber normalizes headers to be downcased before comparing them here. For
performance reasons it is advised that you pass lower cased keys.
# Example
```elixir
config :timber, :header_keys_to_sanitize, ["my-sensitive-header-name"]
```
"""
def header_keys_to_sanitize, do: Application.get_env(@application, :header_keys_to_sanitize, [])
@doc """
Configuration for the `:body` byte size limit in the `Timber.Events.HTTP*` events.
Bodies that exceed this limit will be truncated to this byte limit. The default is
`2048` with a maximum allowed value of `8192`.
# Example
```elixir
config :timber, :http_body_size_limit, 2048
```
"""
def http_body_size_limit,
do: Application.get_env(@application, :http_body_size_limit, @default_http_body_max_bytes)
@doc """
Alternate URL for delivering logs. This is helpful if you want to use a proxy,
for example.
# Example
```elixir
config :timber, :http_url, "https://192.168.127.12"
```
"""
def http_url, do: Application.get_env(@application, :http_url)
@doc """
Specify a different JSON encoder function. Timber uses `Poison` by default.
The specified function must take any data structure and return `iodata`. It
should raise on encode failures.
# Example
```elixir
config :timber, :json_encoder, fn map -> encode(map) end
```
"""
@spec json_encoder() :: (any -> iodata)
def json_encoder,
do: Application.get_env(@application, :json_encoder, &Poison.encode_to_iodata!/1)
@doc """
Unfortunately the `Elixir.Logger` produces timestamps with microsecond precision.
In a high volume system, this can produce logs with matching timestamps, making it
impossible to preserve the order of the logs. By enabling this, Timber will discard
the default `Elixir.Logger` timestamps and use its own with nanosecond precision.
# Example
```elixir
config :timber, :nanosecond_timestamps, true
```
"""
@spec use_nanosecond_timestamps? :: boolean
def use_nanosecond_timestamps? do
Application.get_env(@application, :nanosecond_timestamps, true)
end
@doc """
Specify the log level at which Phoenix log lines, such as template renders, are written.
# Example
```elixir
config :timber, :instrumentation_level, :info
```
"""
@spec phoenix_instrumentation_level(atom) :: atom
def phoenix_instrumentation_level(default) do
Application.get_env(@application, :instrumentation_level, default)
end
def capture_errors?, do: Application.get_env(@application, :capture_errors, false)
def disable_tty?,
do: Application.get_env(@application, :disable_kernel_error_tty, capture_errors?())
end
|
lib/timber/config.ex
| 0.870418
| 0.773516
|
config.ex
|
starcoder
|
defmodule Strap do
@moduledoc """
A module for using SRP (Secure Remote Password) versions 6 and 6a in Elixir.
"""
@type hash_fn :: (iodata -> binary)
@type hash_types :: :sha | :sha256 | atom
@type srp_version :: :srp6 | :srp6a
@type protocol :: {srp_version, binary, non_neg_integer, non_neg_integer, hash_fn}
@type client :: {:client, protocol, non_neg_integer, non_neg_integer, non_neg_integer}
@type server :: {:server, protocol, non_neg_integer, non_neg_integer, non_neg_integer}
@type bin_number :: non_neg_integer | binary
@doc """
Creates a protocol structure.
## Parameters
- srp_version: Either `:srp6` or `:srp6a`
- prime: A binary string representing the prime `N` value
- generator: The generator `g` integer
- hash: One of the hash atoms supported by `:crypto.hash/2` or a
`fn/1` that takes an `t:iodata` value and returns a binary
hash of that value.
## Returns
A protocol structure, for use in `server/3` or `client/5`.
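## Example
A minimal sketch using a known-good group from `prime_group/1`:
{prime, generator} = Strap.prime_group(2048)
protocol = Strap.protocol(:srp6a, prime, generator, :sha256)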
"""
@spec protocol(srp_version, binary, bin_number, hash_types) :: protocol
def protocol(version, prime, generator, hash \\ :sha)
when version in [:srp6, :srp6a] do
hash = hash_fn(hash)
k = gen_k(version, prime, generator, hash)
g = to_int(generator)
{version, prime, g, k, hash}
end
@doc """
Creates a server structure.
## Parameters
- protocol: a protocol structure created by `protocol/4`.
- verifier: the verifier value, either `t:integer` or `t:binary`.
- private: the private key for the server; if not provided, a
256-bit secure random value will be generated.
## Returns
A server structure, for use with `public_key/1` and `session_key/2`.
"""
@spec server(protocol, bin_number, bin_number) :: server
def server(protocol, verifier, private \\ rand_bytes()) do
v = to_int(verifier)
b_priv = to_int(private)
b_pub = gen_b_pub(protocol, v, b_priv)
{:server, protocol, v, b_priv, b_pub}
end
@doc """
Creates a client structure.
## Parameters
- protocol: a protocol structure created by `protocol/4`.
- username: a `t:String.t` or `t:binary` username.
- password: a `t:String.t` or `t:binary` password.
- salt: the salt, `t:String.t` or `t:binary`, as provided from
the server.
- private: the private key for the client; if not provided, a
256-bit secure random value will be generated.
## Returns
A client structure, for use with `public_key/1` and `session_key/2`.
## Notes
The username and password are not stored in the resulting structure,
but a hash of their values _is_ stored.
"""
@spec client(protocol, binary, binary, bin_number, bin_number) :: client
def client(protocol, username, password, salt, private \\ rand_bytes()) do
{_ver, _n, _g, _k, hash} = protocol
x = gen_x(username, password, salt, hash)
a_priv = to_int(private)
a_pub = gen_a_pub(protocol, a_priv)
{:client, protocol, x, a_priv, a_pub}
end
@doc """
Returns the public key for a given client or server.
## Parameters
- client_server: either a client or server structure, from which
the public key will be produced.
## Returns
A binary representation of the public key.
"""
@spec public_value(client | server) :: binary
def public_value({:client, _proto, _x, _a_priv, a_pub}), do: to_bin(a_pub)
def public_value({:server, _proto, _v, _b_priv, b_pub}), do: to_bin(b_pub)
@doc """
Generates a session key for communication with the remote counterparty.
## Parameters
- client_server: either a client or server structure.
- counterparty_public: the counterparty's public value.
## Returns
Either:
- `{:ok, session_key}`: if the session key creation was successful
- `{:error, reason}`: if the session key creation was unsuccessful
Session key creation can be unsuccessful if certain mathematical properties
do not hold, compromising the security of unshared secrets or future
communication.
"""
@spec session_key(client | server, bin_number) :: {:error, atom} | {:ok, binary}
def session_key({:server, protocol, v, b_priv, b_pub}, client_public) do
# u = SHA1(PAD(A) | PAD(B))
# <premaster secret> = (A * v^u) ^ b % N
{_ver, n, _g, _k, hash} = protocol
n_int = to_int(n)
a_pub = to_int(client_public)
case rem(a_pub, n_int) do
0 -> {:error, :invalid_parameters}
_ ->
u = gen_u(n, a_pub, b_pub, hash)
case u do
0 -> {:error, :invalid_parameters}
_ ->
v_exp_u = to_int(pow_mod(v, u, n))
key = pow_mod(a_pub * v_exp_u, b_priv, n)
{:ok, key}
end
end
end
def session_key({:client, protocol, x, a_priv, a_pub}, server_public) do
# RFC5054
# <premaster secret> = (B - (k * g^x)) ^ (a + (u * x)) % N
{_ver, n, g, k, hash} = protocol
n_int = to_int(n)
b_pub = to_int(server_public)
case rem(b_pub, n_int) do
0 -> {:error, :invalid_parameters}
_ ->
u = gen_u(n, a_pub, b_pub, hash)
case u do
0 -> {:error, :invalid_parameters}
_ ->
base = b_pub + n_int - rem(k * to_int(pow_mod(g, x, n)), n_int)
exp = a_priv + u * x
key = pow_mod(base, exp, n)
{:ok, key}
end
end
end
@doc """
Creates a verifier value that could be sent to the server, e.g.
during account creation, without ever sharing the user password.
## Parameters
- client: a client, created previously.
## Returns
A binary string of the verifier.
"""
@spec verifier(client) :: binary
def verifier({:client, protocol, x, _a_priv, _a_pub}) do
# x = SHA1(s | SHA1(I | ":" | P))
# v = g^x % N
{_ver, n, g, _k, _hash} = protocol
pow_mod(g, x, n)
end
@doc """
Same as `verifier/1`, but can be used only with a protocol,
not a full client. Could be used, e.g. on the server if
the server is supposed to verify characteristics of the user's
password before creating a verifier.
## Parameters
- protocol: a protocol object created with `protocol/4`.
- username: the username.
- password: the password.
- salt: the salt.
## Returns
A binary string of the verifier.
"""
@spec verifier(protocol, binary, binary, bin_number) :: binary
def verifier(protocol, username, password, salt) do
{_ver, n, g, _k, hash} = protocol
x = gen_x(username, password, salt, hash)
pow_mod(g, x, n)
end
# Helper macro to convert large string-formatted hex values to
# binstrings at compile time
@spec hex_to_bin(String.t) :: binary
defmacrop hex_to_bin(hex) do
{:ok, val} =
hex
|> String.replace(~r/\s/m, "")
|> String.upcase()
|> Base.decode16()
val
end
@doc """
Returns known-good primes and generators as defined in RFC5054.
The following bit-sizes are defined: 1024, 1536, 2048, 3072, 4096,
6144, 8192.
## Parameters
- bit_size: the size in bits of the prime group
## Returns
Tuple of the form `{<<prime :: binary>>, generator}`
"""
@spec prime_group(pos_integer) :: {binary, pos_integer}
def prime_group(1024) do
{
hex_to_bin("""
EEAF0AB9 ADB38DD6 9C33F80A FA8FC5E8 60726187 75FF3C0B 9EA2314C
9C256576 D674DF74 96EA81D3 383B4813 D692C6E0 E0D5D8E2 50B98BE4
8E495C1D 6089DAD1 5DC7D7B4 6154D6B6 CE8EF4AD 69B15D49 82559B29
7BCF1885 C529F566 660E57EC 68EDBC3C 05726CC0 2FD4CBF4 976EAA9A
FD5138FE 8376435B 9FC61D2F C0EB06E3
"""),
2
}
end
def prime_group(1536) do
{
hex_to_bin("""
9DEF3CAF B939277A B1F12A86 17A47BBB DBA51DF4 99AC4C80 BEEEA961
4B19CC4D 5F4F5F55 6E27CBDE 51C6A94B E4607A29 1558903B A0D0F843
80B655BB 9A22E8DC DF028A7C EC67F0D0 8134B1C8 B9798914 9B609E0B
E3BAB63D 47548381 DBC5B1FC 764E3F4B 53DD9DA1 158BFD3E 2B9C8CF5
6EDF0195 39349627 DB2FD53D 24B7C486 65772E43 7D6C7F8C E442734A
F7CCB7AE 837C264A E3A9BEB8 7F8A2FE9 B8B5292E 5A021FFF 5E91479E
8CE7A28C 2442C6F3 15180F93 499A234D CF76E3FE D135F9BB
"""),
2
}
end
def prime_group(2048) do
{
hex_to_bin("""
AC6BDB41 324A9A9B F166DE5E 1389582F AF72B665 1987EE07 FC319294
3DB56050 A37329CB B4A099ED 8193E075 7767A13D D52312AB 4B03310D
CD7F48A9 DA04FD50 E8083969 EDB767B0 CF609517 9A163AB3 661A05FB
D5FAAAE8 2918A996 2F0B93B8 55F97993 EC975EEA A80D740A DBF4FF74
7359D041 D5C33EA7 1D281E44 6B14773B CA97B43A 23FB8016 76BD207A
436C6481 F1D2B907 8717461A 5B9D32E6 88F87748 544523B5 24B0D57D
5EA77A27 75D2ECFA 032CFBDB F52FB378 61602790 04E57AE6 AF874E73
03CE5329 9CCC041C 7BC308D8 2A5698F3 A8D0C382 71AE35F8 E9DBFBB6
94B5C803 D89F7AE4 35DE236D 525F5475 9B65E372 FCD68EF2 0FA7111F
9E4AFF73
"""),
2
}
end
def prime_group(3072) do
{
hex_to_bin("""
FFFFFFFF FFFFFFFF C90FDAA2 2168C234 C4C6628B 80DC1CD1 29024E08
8A67CC74 020BBEA6 3B139B22 514A0879 8E3404DD EF9519B3 CD3A431B
302B0A6D F25F1437 4FE1356D 6D51C245 E485B576 625E7EC6 F44C42E9
A637ED6B 0BFF5CB6 F406B7ED EE386BFB 5A899FA5 AE9F2411 7C4B1FE6
49286651 ECE45B3D C2007CB8 A163BF05 98DA4836 1C55D39A 69163FA8
FD24CF5F 83655D23 DCA3AD96 1C62F356 208552BB 9ED52907 7096966D
670C354E 4ABC9804 F1746C08 CA18217C 32905E46 2E36CE3B E39E772C
180E8603 9B2783A2 EC07A28F B5C55DF0 6F4C52C9 DE2BCBF6 95581718
3995497C EA956AE5 15D22618 98FA0510 15728E5A 8AAAC42D AD33170D
04507A33 A85521AB DF1CBA64 ECFB8504 58DBEF0A 8AEA7157 5D060C7D
B3970F85 A6E1E4C7 ABF5AE8C DB0933D7 1E8C94E0 4A25619D CEE3D226
1AD2EE6B F12FFA06 D98A0864 D8760273 3EC86A64 521F2B18 177B200C
BBE11757 7A615D6C 770988C0 BAD946E2 08E24FA0 74E5AB31 43DB5BFC
E0FD108E 4B82D120 A93AD2CA FFFFFFFF FFFFFFFF
"""),
5
}
end
def prime_group(4096) do
{
hex_to_bin("""
FFFFFFFF FFFFFFFF C90FDAA2 2168C234 C4C6628B 80DC1CD1 29024E08
8A67CC74 020BBEA6 3B139B22 514A0879 8E3404DD EF9519B3 CD3A431B
302B0A6D F25F1437 4FE1356D 6D51C245 E485B576 625E7EC6 F44C42E9
A637ED6B 0BFF5CB6 F406B7ED EE386BFB 5A899FA5 AE9F2411 7C4B1FE6
49286651 ECE45B3D C2007CB8 A163BF05 98DA4836 1C55D39A 69163FA8
FD24CF5F 83655D23 DCA3AD96 1C62F356 208552BB 9ED52907 7096966D
670C354E 4ABC9804 F1746C08 CA18217C 32905E46 2E36CE3B E39E772C
180E8603 9B2783A2 EC07A28F B5C55DF0 6F4C52C9 DE2BCBF6 95581718
3995497C EA956AE5 15D22618 98FA0510 15728E5A 8AAAC42D AD33170D
04507A33 A85521AB DF1CBA64 ECFB8504 58DBEF0A 8AEA7157 5D060C7D
B3970F85 A6E1E4C7 ABF5AE8C DB0933D7 1E8C94E0 4A25619D CEE3D226
1AD2EE6B F12FFA06 D98A0864 D8760273 3EC86A64 521F2B18 177B200C
BBE11757 7A615D6C 770988C0 BAD946E2 08E24FA0 74E5AB31 43DB5BFC
E0FD108E 4B82D120 A9210801 1A723C12 A787E6D7 88719A10 BDBA5B26
99C32718 6AF4E23C 1A946834 B6150BDA 2583E9CA 2AD44CE8 DBBBC2DB
04DE8EF9 2E8EFC14 1FBECAA6 287C5947 4E6BC05D 99B2964F A090C3A2
233BA186 515BE7ED 1F612970 CEE2D7AF B81BDD76 2170481C D0069127
D5B05AA9 93B4EA98 8D8FDDC1 86FFB7DC 90A6C08F 4DF435C9 34063199
FFFFFFFF FFFFFFFF
"""),
5
}
end
def prime_group(6144) do
{
hex_to_bin("""
FFFFFFFF FFFFFFFF C90FDAA2 2168C234 C4C6628B 80DC1CD1 29024E08
8A67CC74 020BBEA6 3B139B22 514A0879 8E3404DD EF9519B3 CD3A431B
302B0A6D F25F1437 4FE1356D 6D51C245 E485B576 625E7EC6 F44C42E9
A637ED6B 0BFF5CB6 F406B7ED EE386BFB 5A899FA5 AE9F2411 7C4B1FE6
49286651 ECE45B3D C2007CB8 A163BF05 98DA4836 1C55D39A 69163FA8
FD24CF5F 83655D23 DCA3AD96 1C62F356 208552BB 9ED52907 7096966D
670C354E 4ABC9804 F1746C08 CA18217C 32905E46 2E36CE3B E39E772C
180E8603 9B2783A2 EC07A28F B5C55DF0 6F4C52C9 DE2BCBF6 95581718
3995497C EA956AE5 15D22618 98FA0510 15728E5A 8AAAC42D AD33170D
04507A33 A85521AB DF1CBA64 ECFB8504 58DBEF0A 8AEA7157 5D060C7D
B3970F85 A6E1E4C7 ABF5AE8C DB0933D7 1E8C94E0 4A25619D CEE3D226
1AD2EE6B F12FFA06 D98A0864 D8760273 3EC86A64 521F2B18 177B200C
BBE11757 7A615D6C 770988C0 BAD946E2 08E24FA0 74E5AB31 43DB5BFC
E0FD108E 4B82D120 A9210801 1A723C12 A787E6D7 88719A10 BDBA5B26
99C32718 6AF4E23C 1A946834 B6150BDA 2583E9CA 2AD44CE8 DBBBC2DB
04DE8EF9 2E8EFC14 1FBECAA6 287C5947 4E6BC05D 99B2964F A090C3A2
233BA186 515BE7ED 1F612970 CEE2D7AF B81BDD76 2170481C D0069127
D5B05AA9 93B4EA98 8D8FDDC1 86FFB7DC 90A6C08F 4DF435C9 34028492
36C3FAB4 D27C7026 C1D4DCB2 602646DE C9751E76 3DBA37BD F8FF9406
AD9E530E E5DB382F 413001AE B06A53ED 9027D831 179727B0 865A8918
DA3EDBEB CF9B14ED 44CE6CBA CED4BB1B DB7F1447 E6CC254B 33205151
2BD7AF42 6FB8F401 378CD2BF 5983CA01 C64B92EC F032EA15 D1721D03
F482D7CE 6E74FEF6 D55E702F 46980C82 B5A84031 900B1C9E 59E7C97F
BEC7E8F3 23A97A7E 36CC88BE 0F1D45B7 FF585AC5 4BD407B2 2B4154AA
CC8F6D7E BF48E1D8 14CC5ED2 0F8037E0 A79715EE F29BE328 06A1D58B
B7C5DA76 F550AA3D 8A1FBFF0 EB19CCB1 A313D55C DA56C9EC 2EF29632
387FE8D7 6E3C0468 043E8F66 3F4860EE 12BF2D5B 0B7474D6 E694F91E
6DCC4024 FFFFFFFF FFFFFFFF
"""),
5
}
end
def prime_group(8192) do
{
hex_to_bin("""
FFFFFFFF FFFFFFFF C90FDAA2 2168C234 C4C6628B 80DC1CD1 29024E08
8A67CC74 020BBEA6 3B139B22 514A0879 8E3404DD EF9519B3 CD3A431B
302B0A6D F25F1437 4FE1356D 6D51C245 E485B576 625E7EC6 F44C42E9
A637ED6B 0BFF5CB6 F406B7ED EE386BFB 5A899FA5 AE9F2411 7C4B1FE6
49286651 ECE45B3D C2007CB8 A163BF05 98DA4836 1C55D39A 69163FA8
FD24CF5F 83655D23 DCA3AD96 1C62F356 208552BB 9ED52907 7096966D
670C354E 4ABC9804 F1746C08 CA18217C 32905E46 2E36CE3B E39E772C
180E8603 9B2783A2 EC07A28F B5C55DF0 6F4C52C9 DE2BCBF6 95581718
3995497C EA956AE5 15D22618 98FA0510 15728E5A 8AAAC42D AD33170D
04507A33 A85521AB DF1CBA64 ECFB8504 58DBEF0A 8AEA7157 5D060C7D
B3970F85 A6E1E4C7 ABF5AE8C DB0933D7 1E8C94E0 4A25619D CEE3D226
1AD2EE6B F12FFA06 D98A0864 D8760273 3EC86A64 521F2B18 177B200C
BBE11757 7A615D6C 770988C0 BAD946E2 08E24FA0 74E5AB31 43DB5BFC
E0FD108E 4B82D120 A9210801 1A723C12 A787E6D7 88719A10 BDBA5B26
99C32718 6AF4E23C 1A946834 B6150BDA 2583E9CA 2AD44CE8 DBBBC2DB
04DE8EF9 2E8EFC14 1FBECAA6 287C5947 4E6BC05D 99B2964F A090C3A2
233BA186 515BE7ED 1F612970 CEE2D7AF B81BDD76 2170481C D0069127
D5B05AA9 93B4EA98 8D8FDDC1 86FFB7DC 90A6C08F 4DF435C9 34028492
36C3FAB4 D27C7026 C1D4DCB2 602646DE C9751E76 3DBA37BD F8FF9406
AD9E530E E5DB382F 413001AE B06A53ED 9027D831 179727B0 865A8918
DA3EDBEB CF9B14ED 44CE6CBA CED4BB1B DB7F1447 E6CC254B 33205151
2BD7AF42 6FB8F401 378CD2BF 5983CA01 C64B92EC F032EA15 D1721D03
F482D7CE 6E74FEF6 D55E702F 46980C82 B5A84031 900B1C9E 59E7C97F
BEC7E8F3 23A97A7E 36CC88BE 0F1D45B7 FF585AC5 4BD407B2 2B4154AA
CC8F6D7E BF48E1D8 14CC5ED2 0F8037E0 A79715EE F29BE328 06A1D58B
B7C5DA76 F550AA3D 8A1FBFF0 EB19CCB1 A313D55C DA56C9EC 2EF29632
387FE8D7 6E3C0468 043E8F66 3F4860EE 12BF2D5B 0B7474D6 E694F91E
6DBE1159 74A3926F 12FEE5E4 38777CB6 A932DF8C D8BEC4D0 73B931BA
3BC832B6 8D9DD300 741FA7BF 8AFC47ED 2576F693 6BA42466 3AAB639C
5AE4F568 3423B474 2BF1C978 238F16CB E39D652D E3FDB8BE FC848AD9
22222E04 A4037C07 13EB57A8 1A23F0C7 3473FC64 6CEA306B 4BCBC886
2F8385DD FA9D4B7F A2C087E8 79683303 ED5BDD3A 062B3CF5 B3A278A6
6D2A13F8 3F44F82D DF310EE0 74AB6A36 4597E899 A0255DC1 64F31CC5
0846851D F9AB4819 5DED7EA1 B1D510BD 7EE74D73 FAF36BC3 1ECFA268
359046F4 EB879F92 4009438B 481C6CD7 889A002E D5EE382B C9190DA6
FC026E47 9558E447 5677E9AA 9E3050E2 765694DF C81F56E8 80B96E71
60C980DD 98EDD3DF FFFFFFFF FFFFFFFF
"""),
19
}
end
# Internal functions
@spec gen_a_pub(protocol, non_neg_integer) :: pos_integer
defp gen_a_pub({_ver, n, g, _k, _hash}, a_priv) do
# A = g^a % N
pow_mod(g, a_priv, n)
|> to_int()
end
@spec gen_b_pub(protocol, non_neg_integer, non_neg_integer) :: pos_integer
defp gen_b_pub({_ver, n, g, k, _hash}, v, b_priv) do
# B = k*v + g^b % N
n_int = to_int(n)
rem(k * v + to_int(pow_mod(g, b_priv, n)), n_int)
end
@spec gen_k(srp_version, binary, pos_integer, hash_fn) :: non_neg_integer
defp gen_k(:srp6, _n, _g, _hash) do
# http://srp.stanford.edu/design.html
# k = 3 for legacy SRP-6
3
end
defp gen_k(:srp6a, n, g, hash) do
# RFC5054
# k = hash(N | PAD(g))
hash.([n, lpad_match(to_bin(g), n)])
|> to_int()
end
@spec gen_x(binary, binary, binary, hash_fn) :: non_neg_integer
defp gen_x(i, p, s, hash) do
# RFC5054
# x = hash(s | SHA1(I | ":" | P))
hash.([s, hash.([i, ":", p])])
|> to_int()
end
@spec gen_u(binary, non_neg_integer, non_neg_integer, hash_fn) :: non_neg_integer
defp gen_u(n, a_pub, b_pub, hash) do
# RFC5054
# u = hash(PAD(A) | PAD(B))
hash.([lpad_match(to_bin(a_pub), n),
lpad_match(to_bin(b_pub), n)])
|> to_int()
end
@spec lpad_match(binary, binary) :: binary
defp lpad_match(data, other) do
lpad_size(data, bit_size(other))
end
@spec lpad_size(binary, pos_integer) :: binary
defp lpad_size(data, width) when bit_size(data) <= width do
padding = width - bit_size(data)
<<0 :: size(padding), data :: binary>>
end
# Helpers
# Converts an atom (or function) into a hashing function
@spec hash_fn(atom | (iodata -> binary)) :: (iodata -> binary)
defp hash_fn(h) when is_atom(h), do: fn x -> :crypto.hash(h, x) end
defp hash_fn(h) when is_function(h), do: h
# Sugar
@spec pow_mod(bin_number, bin_number, bin_number) :: binary
defp pow_mod(n, e, m), do: :crypto.mod_pow(n, e, m)
# Sugar
@spec rand_bytes(pos_integer) :: binary
defp rand_bytes(size \\ 32), do: :crypto.strong_rand_bytes(size)
@spec to_bin(non_neg_integer | binary) :: binary
defp to_bin(val) when is_bitstring(val), do: val
defp to_bin(val) when is_integer(val), do: :binary.encode_unsigned(val)
@spec to_int(binary | non_neg_integer) :: non_neg_integer
defp to_int(val) when is_integer(val), do: val
defp to_int(val) when is_bitstring(val), do: :binary.decode_unsigned(val)
end
|
lib/strap.ex
| 0.919814
| 0.550728
|
strap.ex
|
starcoder
|
import Kernel, except: [apply: 2]
defmodule Ecto.Query.Builder.Distinct do
@moduledoc false
alias Ecto.Query.Builder
@doc """
Escapes a list of quoted expressions.
iex> escape(quote do true end, {[], :acc}, [], __ENV__)
{true, {[], :acc}}
iex> escape(quote do [x.x, 13] end, {[], :acc}, [x: 0], __ENV__)
{[asc: {:{}, [], [{:{}, [], [:., [], [{:{}, [], [:&, [], [0]]}, :x]]}, [], []]},
asc: 13],
{[], :acc}}
"""
@spec escape(Macro.t, {list, term}, Keyword.t, Macro.Env.t) :: {Macro.t, {list, term}}
def escape(expr, params_acc, _vars, _env) when is_boolean(expr) do
{expr, params_acc}
end
def escape({:^, _, [expr]}, params_acc, _vars, _env) do
{quote(do: Ecto.Query.Builder.Distinct.distinct!(unquote(expr))), params_acc}
end
def escape(expr, params_acc, vars, env) do
Ecto.Query.Builder.OrderBy.escape(:distinct, expr, params_acc, vars, env)
end
@doc """
Called at runtime to verify distinct.
"""
def distinct!(distinct) when is_boolean(distinct) do
distinct
end
def distinct!(distinct) do
Ecto.Query.Builder.OrderBy.order_by!(:distinct, distinct)
end
@doc """
Builds a quoted expression.
The quoted expression should evaluate to a query at runtime.
If possible, it does all calculations at compile time to avoid
runtime work.
"""
@spec build(Macro.t, [Macro.t], Macro.t, Macro.Env.t) :: Macro.t
def build(query, binding, expr, env) do
{query, binding} = Builder.escape_binding(query, binding, env)
{expr, {params, _}} = escape(expr, {[], :acc}, binding, env)
params = Builder.escape_params(params)
distinct = quote do: %Ecto.Query.QueryExpr{
expr: unquote(expr),
params: unquote(params),
file: unquote(env.file),
line: unquote(env.line)}
Builder.apply_query(query, __MODULE__, [distinct], env)
end
@doc """
The callback applied by `build/4` to build the query.
"""
@spec apply(Ecto.Queryable.t, term) :: Ecto.Query.t
def apply(%Ecto.Query{distinct: nil} = query, expr) do
%{query | distinct: expr}
end
def apply(%Ecto.Query{}, _expr) do
Builder.error! "only one distinct expression is allowed in query"
end
def apply(query, expr) do
apply(Ecto.Queryable.to_query(query), expr)
end
end
# Source: lib/ecto/query/builder/distinct.ex
defmodule Jan.GameServer do
@moduledoc """
This module is responsible for managing a single game room.
Its state is the list of players in this single game, with their scores and weapons.
"""
use GenServer
def start_link(room_id) do
GenServer.start_link(__MODULE__,
[],
name: via_tuple(room_id))
end
defp via_tuple(room_id) do
{:via, Jan.Registry, {:game_server, room_id}}
end
@doc """
Adds a player to the game with the given `player_name`.
Players start with a `score` of 0 and an empty `weapon`.
Returns `:ok` or `{:error, message}`.
"""
def add_player(room_id, player_name) do
case GenServer.call(via_tuple(room_id), {:add_player, player_name}) do
:ok -> :ok
:duplicate -> {:error, "There is already a '#{player_name}' in this room"}
:empty -> {:error, "The name must be filled in"}
:game_already_started -> {:error, "There's a game being played in this room already"}
end
end
def remove_player(room_id, player_name) do
GenServer.cast(via_tuple(room_id), {:remove_player, player_name})
end
@doc """
Finds the user with the given `player_name` and sets its `weapon`.
If all the users have played already, it will try to find the winner,
returning `{:winner, player}` or `:draw` (or `nil` while moves are still pending).
"""
def choose_weapon(room_id, player_name, weapon) do
case GenServer.call(via_tuple(room_id), {:choose_weapon, player_name, weapon}) do
{:winner, winner} ->
GenServer.cast(via_tuple(room_id), {:increment_winner_score, winner})
{:winner, winner}
other_result -> other_result
end
end
@doc """
Resets the game by setting all the users' weapons to an empty string.
"""
def reset_game(room_id) do
GenServer.cast(via_tuple(room_id), :reset_game)
end
@doc """
Returns the list of all the players that are in this game, with their
score and weapon.
"""
def get_players_list(room_id) do
GenServer.call(via_tuple(room_id), :players_list)
end
## SERVER
def init(_) do
{:ok, []}
end
def handle_cast({:remove_player, player_name}, state) do
new_state = Enum.filter(state, &(&1.name != player_name))
if Enum.empty?(new_state) do
{:stop, :normal, new_state}
else
{:noreply, new_state}
end
end
def handle_cast(:reset_game, state) do
{:noreply, Enum.map(state, &(%{&1 | weapon: ""}))}
end
def handle_cast({:increment_winner_score, winner}, state) do
# Rebinding inside `if` does not escape its scope, so return the updated map
new_state = Enum.map(state, fn player ->
if player.name == winner.name, do: %{player | score: player.score + 1}, else: player
end)
{:noreply, new_state}
end
def handle_call({:add_player, player_name}, _from, state) do
cond do
Enum.any?(state, &(String.downcase(&1.name) == String.downcase(player_name))) ->
{:reply, :duplicate, state}
String.trim(player_name) == "" ->
{:reply, :empty, state}
Enum.any?(state, &(&1.weapon != "")) ->
{:reply, :game_already_started, state}
true ->
{:reply, :ok, [%{name: player_name, weapon: "", score: 0} | state]}
end
end
def handle_call(:players_list, _from, state) do
{:reply, state, state}
end
def handle_call({:choose_weapon, player_name, weapon}, _from, state) do
# Same scoping pitfall as above: return the updated map from the `if`
new_state = Enum.map(state, fn player ->
if player.name == player_name, do: %{player | weapon: weapon}, else: player
end)
{:reply, answer_for(new_state), new_state}
end
defp answer_for(players) do
all_players_moved = players |> Enum.map(&(&1.weapon)) |> Enum.all?(&(&1 != ""))
if all_players_moved, do: find_winner(players)
end
defp find_winner(players) do
is_winner = fn current -> beat_all?(current.weapon, List.delete(players, current)) end
winner = Enum.find(players, is_winner)
if winner, do: {:winner, winner}, else: :draw
end
@doc """
Returns `true` if given `weapon` beats the weapons of all the other `players`.
This is useful when we want to support more than 2 players, meaning that for
any number of player, one is the winner if its weapon beats all the others'.
"""
defp beat_all?(weapon, players) do
this_beat_that = %{"rock" => "scissors",
"paper" => "rock",
"scissors" => "paper"}
weapon_that_i_beat = Map.get(this_beat_that, weapon)
Enum.all?(players, &(&1.weapon == weapon_that_i_beat))
end
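# Hand-checked sketch of the rule above: "rock" wins only when every other
# player chose the weapon it beats.
#
#     beat_all?("rock", [%{name: "b", weapon: "scissors", score: 0}])  #=> true
#     beat_all?("rock", [%{name: "b", weapon: "paper", score: 0}])     #=> false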
end
# Source: lib/jan/game_server.ex
defmodule Calendar.NaiveDateTime.Parse do
import Calendar.ParseUtil
@doc """
Parse ASN.1 GeneralizedTime.
Returns a tuple: {:ok, NaiveDateTime, UTC offset in seconds (or nil)}
## Examples
iex> "19851106210627.3" |> asn1_generalized
{:ok, %NaiveDateTime{year: 1985, month: 11, day: 6, hour: 21, minute: 6, second: 27, microsecond: {300_000, 1}}, nil}
iex> "19851106210627.3Z" |> asn1_generalized
{:ok, %NaiveDateTime{year: 1985, month: 11, day: 6, hour: 21, minute: 6, second: 27, microsecond: {300_000, 1}}, 0}
iex> "19851106210627.3-5000" |> asn1_generalized
{:ok, %NaiveDateTime{year: 1985, month: 11, day: 6, hour: 21, minute: 6, second: 27, microsecond: {300_000, 1}}, -180000}
"""
def asn1_generalized(string) do
captured = string |> capture_generalized_time_string
if captured do
parse_captured_iso8601(captured, captured["z"], captured["offset_hours"], captured["offset_mins"])
else
{:bad_format, nil, nil}
end
end
defp capture_generalized_time_string(string) do
~r/(?<year>[\d]{4})(?<month>[\d]{2})(?<day>[\d]{2})(?<hour>[\d]{2})(?<min>[\d]{2})(?<sec>[\d]{2})(\.(?<fraction>[\d]+))?(?<z>[zZ])?((?<offset_sign>[\+\-])(?<offset_hours>[\d]{1,2})(?<offset_mins>[\d]{2}))?/
|> Regex.named_captures(string)
end
@doc """
Parses a "C time" string.
## Examples
iex> Calendar.NaiveDateTime.Parse.asctime("Wed Apr 9 07:53:03 2003")
{:ok, %NaiveDateTime{year: 2003, month: 4, day: 9, hour: 7, minute: 53, second: 3, microsecond: {0, 0}}}
iex> asctime("Thu, Apr 10 07:53:03 2003")
{:ok, %NaiveDateTime{year: 2003, month: 4, day: 10, hour: 7, minute: 53, second: 3, microsecond: {0, 0}}}
"""
def asctime(string) do
cap = string |> capture_asctime_string
month_num = month_number_for_month_name(cap["month"])
Calendar.NaiveDateTime.from_erl({{cap["year"]|>to_int, month_num, cap["day"]|>to_int}, {cap["hour"]|>to_int, cap["min"]|>to_int, cap["sec"]|>to_int}})
end
@doc """
Like `asctime/1`, but returns the result without tagging it with :ok.
## Examples
iex> asctime!("Wed Apr 9 07:53:03 2003")
%NaiveDateTime{year: 2003, month: 4, day: 9, hour: 7, minute: 53, second: 3, microsecond: {0, 0}}
iex> asctime!("Thu, Apr 10 07:53:03 2003")
%NaiveDateTime{year: 2003, month: 4, day: 10, hour: 7, minute: 53, second: 3, microsecond: {0, 0}}
"""
def asctime!(string) do
{:ok, result} = asctime(string)
result
end
defp capture_asctime_string(string) do
~r/(?<month>[^\d]{3})[\s]+(?<day>[\d]{1,2})[\s]+(?<hour>[\d]{2})[^\d]?(?<min>[\d]{2})[^\d]?(?<sec>[\d]{2})[^\d]?(?<year>[\d]{4})/
|> Regex.named_captures(string)
end
@doc """
Parses an ISO8601 datetime. Returns {:ok, NaiveDateTime struct, UTC offset in seconds}
In case there is no UTC offset, the third element of the tuple will be nil.
## Examples
# With offset
iex> iso8601("1996-12-19T16:39:57-0200")
{:ok, %NaiveDateTime{year: 1996, month: 12, day: 19, hour: 16, minute: 39, second: 57, microsecond: {0, 0}}, -7200}
# Without offset
iex> iso8601("1996-12-19T16:39:57")
{:ok, %NaiveDateTime{year: 1996, month: 12, day: 19, hour: 16, minute: 39, second: 57, microsecond: {0, 0}}, nil}
# With fractional seconds
iex> iso8601("1996-12-19T16:39:57.123")
{:ok, %NaiveDateTime{year: 1996, month: 12, day: 19, hour: 16, minute: 39, second: 57, microsecond: {123000, 3}}, nil}
# With Z denoting 0 offset
iex> iso8601("1996-12-19T16:39:57Z")
{:ok, %NaiveDateTime{year: 1996, month: 12, day: 19, hour: 16, minute: 39, second: 57, microsecond: {0, 0}}, 0}
# Invalid date
iex> iso8601("1996-13-19T16:39:57Z")
{:error, :invalid_datetime, nil}
"""
def iso8601(string) do
captured = string |> capture_iso8601_string
if captured do
parse_captured_iso8601(captured, captured["z"], captured["offset_hours"], captured["offset_mins"])
else
{:bad_format, nil, nil}
end
end
defp parse_captured_iso8601(captured, z, _, _) when z != "" do
parse_captured_iso8601(captured, "", "00", "00")
end
defp parse_captured_iso8601(captured, _z, "", "") do
{tag, ndt} = Calendar.NaiveDateTime.from_erl(erl_date_time_from_regex_map(captured), parse_fraction(captured["fraction"]))
{tag, ndt, nil}
end
defp parse_captured_iso8601(captured, _z, offset_hours, offset_mins) do
{tag, ndt} = Calendar.NaiveDateTime.from_erl(erl_date_time_from_regex_map(captured), parse_fraction(captured["fraction"]))
if tag == :ok do
{:ok, offset_in_seconds} = offset_from_captured(captured, offset_hours, offset_mins)
{tag, ndt, offset_in_seconds}
else
{tag, ndt, nil}
end
end
defp offset_from_captured(captured, offset_hours, offset_mins) do
offset_in_secs = hours_mins_to_secs!(offset_hours, offset_mins)
offset_in_secs = case captured["offset_sign"] do
"-" -> offset_in_secs*-1
_ -> offset_in_secs
end
{:ok, offset_in_secs}
end
defp capture_iso8601_string(string) do
~r/(?<year>[\d]{4})[^\d]?(?<month>[\d]{2})[^\d]?(?<day>[\d]{2})[^\d](?<hour>[\d]{2})[^\d]?(?<min>[\d]{2})[^\d]?(?<sec>[\d]{2})(\.(?<fraction>[\d]+))?(?<z>[zZ])?((?<offset_sign>[\+\-])(?<offset_hours>[\d]{1,2}):?(?<offset_mins>[\d]{2}))?/
|> Regex.named_captures(string)
end
defp erl_date_time_from_regex_map(mapped) do
erl_date_time_from_strings({{mapped["year"],mapped["month"],mapped["day"]},{mapped["hour"],mapped["min"],mapped["sec"]}})
end
defp erl_date_time_from_strings({{year, month, date},{hour, min, sec}}) do
{ {year|>to_int, month|>to_int, date|>to_int},
{hour|>to_int, min|>to_int, sec|>to_int} }
end
defp parse_fraction(""), do: {0, 0}
# parse and return microseconds
defp parse_fraction(string) do
usec = String.slice(string, 0..5)
|> String.pad_trailing(6, "0")
|> Integer.parse
|> elem(0)
{usec, String.length(string)}
end
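# A worked example of the fraction handling above (values checked by hand):
#
#     parse_fraction("3")     #=> {300_000, 1}
#     parse_fraction("1234")  #=> {123_400, 4}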
end
# Source: data/web/deps/calendar/lib/calendar/naive_date_time/parse.ex
defmodule SegmentTree do
@moduledoc """
Data structure to compute operations over ranges efficiently.
## Problem
Given a range [0, n-1] of n values, we want to efficiently apply an operation
(e.g. sum) to arbitrary sub-ranges. A naive approach iterates through the list
and answers each query in O(n) time:
list = [a, b, ..., z]
list_cut = cut(list, k, n)
sum = Enum.sum(list_cut)
If many random ranges must be computed on this list, a SegmentTree answers
each query in O(log(n)) time (`cut/3` and `populate/2` are pseudocode helpers,
not part of this module):
segment_tree = SegmentTree.new(n, &Kernel.+/2)
segment_tree = populate(segment_tree, list)
sum = SegmentTree.aggregate(segment_tree, k, n)
"""
defstruct [:max_index, :aggregate_fun, :tree, :default]
@type t :: %__MODULE__{
max_index: non_neg_integer,
aggregate_fun: (term, term -> term),
tree: map,
default: term
}
@doc """
Creates a new SegmentTree structure.
`max_index` must be higher than any index used in the range.
## Examples
SegmentTree.new(1_000, &Kernel.+/2)
#=> %SegmentTree{default: 0, tree: %{}, aggregate_fun: &Kernel.+/2, max_index: 1_023}
"""
@spec new(non_neg_integer, (term, term -> term), term) :: SegmentTree.t
def new(max_index, aggregate_fun, default \\ 0) do
max_index = round(:math.pow(2, round(:math.ceil(:math.log2(max_index))))) - 1
%SegmentTree{max_index: max_index, aggregate_fun: aggregate_fun, tree: %{}, default: default}
end
@doc """
Inserts `value` into the SegmentTree at the given `index`, combining it with any existing value via the tree's `aggregate_fun`.
## Examples
SegmentTree.put(%SegmentTree{...}, 10, 29)
#=> %SegmentTree{...}
"""
@spec put(SegmentTree.t, non_neg_integer, term) :: SegmentTree.t
def put(segment_tree, index, value) do
put(segment_tree, 0, 0, segment_tree.max_index, index, value)
end
defp put(segment_tree, tree_index, min, max, index, value) do
new_tree =
Map.update(segment_tree.tree, tree_index, value, &segment_tree.aggregate_fun.(&1, value))
segment_tree = %{segment_tree | tree: new_tree}
mid = min + round((max - min + 1) / 2)
cond do
min == max -> segment_tree
index < mid -> put(segment_tree, tree_index * 2 + 1, min, mid - 1, index, value)
index >= mid -> put(segment_tree, tree_index * 2 + 2, mid, max, index, value)
end
end
@doc """
Computes the aggregated value of a SegmentTree over the inclusive range
[range_min, range_max].
## Examples
SegmentTree.aggregate(%SegmentTree{...}, 10, 29)
#=> aggregated value over indices 10..29
"""
@spec aggregate(SegmentTree.t, non_neg_integer, non_neg_integer) :: term
def aggregate(segment_tree, range_min, range_max) do
aggregate(segment_tree, 0, 0, segment_tree.max_index, range_min, range_max)
end
defp aggregate(segment_tree, _, min, max, range_min, range_max)
when max < range_min or min > range_max,
do: segment_tree.default
defp aggregate(%{tree: tree, default: default}, tree_index, min, max, range_min, range_max)
when range_min <= min and max <= range_max,
do: Map.get(tree, tree_index, default)
defp aggregate(segment_tree, tree_index, min, max, range_min, range_max) do
mid = min + round((max - min + 1) / 2)
agg1 = aggregate(segment_tree, tree_index * 2 + 1, min, mid - 1, range_min, range_max)
agg2 = aggregate(segment_tree, tree_index * 2 + 2, mid, max, range_min, range_max)
segment_tree.aggregate_fun.(agg1, agg2)
end
end
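# Usage sketch (hypothetical driver code, not part of the module): build a sum
# tree over ten values, then query the sum of indices 2..5 (3 + 4 + 5 + 6 = 18).
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
tree =
values
|> Enum.with_index()
|> Enum.reduce(SegmentTree.new(10, &Kernel.+/2), fn {value, index}, acc ->
SegmentTree.put(acc, index, value)
end)
18 = SegmentTree.aggregate(tree, 2, 5)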
# Source: lib/segment_tree.ex
defmodule Absinthe.Federation.Notation do
@moduledoc """
Module that includes macros for annotating a schema with federation directives.
## Example
defmodule MyApp.MySchema.Types do
use Absinthe.Schema.Notation
+ use Absinthe.Federation.Notation
end
"""
defmacro __using__(_opts) do
notations()
end
@spec notations() :: Macro.t()
defp notations() do
quote do
import Absinthe.Federation.Notation, only: :macros
end
end
@doc """
Adds a `@key` directive to the type which indicates a combination of fields
that can be used to uniquely identify and fetch an object or interface.
This allows the type to be extended by other services.
A string rather than an atom is used here to support composite keys, e.g. `id organization { id }`
## Example
object :user do
key_fields("id")
field :id, non_null(:id)
end
## SDL Output
type User @key(fields: "id") {
id: ID!
}
"""
defmacro key_fields(fields) when is_binary(fields) or is_list(fields) do
quote do
meta :key_fields, unquote(fields)
end
end
@doc """
Adds the `@external` directive to the field which marks a field as owned by another service.
This allows service A to use fields from service B while also knowing at runtime the types of that field.
## Example
object :user do
extends()
key_fields("email")
field :email, :string do
external()
end
field :reviews, list_of(:review)
end
## SDL Output
# extended from the Users service
type User @key(fields: "email") @extends {
email: String @external
reviews: [Review]
}
This type extension in the Reviews service extends the User type from the Users service.
It extends it for the purpose of adding a new field called reviews, which returns a list of `Review`s.
"""
defmacro external() do
quote do
meta :external, true
end
end
@doc """
Adds the `@requires` directive which is used to annotate the required input fieldset from a base type for a resolver.
It is used to develop a query plan where the required fields may not be needed by the client,
but the service may need additional information from other services.
## Example
object :user do
extends()
key_fields("id")
field :id, non_null(:id) do
external()
end
field :email, :string do
external()
end
field :reviews, list_of(:review) do
requires_fields("email")
end
end
## SDL Output
# extended from the Users service
type User @key(fields: "id") @extends {
id: ID! @external
email: String @external
reviews: [Review] @requires(fields: "email")
}
In this case, the Reviews service adds new capabilities to the `User` type by providing
a list of `reviews` related to a `User`. In order to fetch these `reviews`, the Reviews service needs
to know the `email` of the `User` from the Users service in order to look up the `reviews`.
This means the `reviews` field / resolver requires the `email` field from the base `User` type.
"""
defmacro requires_fields(fields) when is_binary(fields) do
quote do
meta :requires_fields, unquote(fields)
end
end
@doc """
Adds the `@provides` directive which is used to annotate the expected returned fieldset
from a field on a base type that is guaranteed to be selectable by the gateway.
## Example
object :review do
key_fields("id")
field :id, non_null(:id)
field :product, :product do
provides_fields("name")
end
end
object :product do
extends()
key_fields("upc")
field :upc, :string do
external()
end
field :name, :string do
external()
end
end
## SDL Output
type Review @key(fields: "id") {
product: Product @provides(fields: "name")
}
type Product @key(fields: "upc") @extends {
upc: String @external
name: String @external
}
When fetching `Review.product` from the Reviews service,
it is possible to request the `name` with the expectation that the Reviews service
can provide it when going from review to product. `Product.name` is an external field
on an external type which is why the local type extension of `Product` and annotation of `name` is required.
"""
defmacro provides_fields(fields) when is_binary(fields) do
quote do
meta :provides_fields, unquote(fields)
end
end
@doc """
Adds the `@extends` directive to the type to indicate that the type is owned by another service.
## Example
object :user do
extends()
key_fields("id")
field :id, non_null(:id)
end
## SDL Output
type User @key(fields: "id") @extends {
id: ID!
}
"""
defmacro extends() do
quote do
meta :extends, true
end
end
end
# Source: lib/absinthe/federation/notation.ex
defmodule Elixir99.Lists do
@moduledoc """
Documentation for `Elixir99.Lists`.
"""
@doc """
Hello world.
## Examples
iex> Elixir99.Lists.hello()
:world
"""
def hello do
:world
end
def last(list) when length(list) >= 1 do
[head | tail] = list
case tail do
[] -> head
_ -> last(tail)
end
end
def but_last(list) when length(list) >= 2 do
case list do
# the second-to-last element is reached when exactly two elements remain
[but, _last] -> but
[_head | tail] -> but_last(tail)
end
end
def element_at(list, index) when index >= 0 and index < length(list) do
[head | tail] = list
case index do
0 -> head
_ -> element_at(tail, index-1)
end
end
def my_length(list, len \\ 0) do
case list do
[] -> len
[_head | tail] -> my_length(tail, len + 1)
end
end
def reverse(list, acc \\ []) do
case list do
[] -> acc
[head | tail] -> reverse(tail, [head | acc])
end
end
def is_palindrome(list) do
list == reverse(list)
end
def flatten(list) do
case list do
x when not is_list(x) -> [x]
[] -> []
[head | tail] -> flatten(head) ++ flatten(tail)
end
end
def compress(list, acc \\ []) do
case list do
[] -> acc
[_] ->
acc ++ list
[first , second | tail] ->
if first == second do
compress([first] ++ tail, acc)
else
compress([second] ++ tail, acc ++ [first])
end
end
end
def pack(list) do
case list do
[first, second | tail] when is_list(first) ->
if hd(first) == second do
pack([first ++ [second]] ++ tail)
else
[first] ++ pack([second] ++ tail)
end
[first, second | tail] ->
if first == second do
pack([[first, second]] ++ tail)
else
[[first]] ++ pack([second] ++ tail)
end
[] -> []
[x] -> [[x]]
end
end
def encode(list, current \\ nil, acc \\ [], counter \\ 0) do
case list do
[] ->
# avoid emitting a {0, nil} pair when the input list was empty
if counter > 0, do: acc ++ [{counter, current}], else: acc
[head | tail] ->
if head == current do
encode(tail, current, acc, counter+1)
else
new_acc = if counter > 0, do: acc ++ [{counter, current}], else: acc
encode(tail, head, new_acc, 1)
end
end
end
end
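# Hand-checked examples of the helpers above (a sketch):
[1, 2, 3] = Elixir99.Lists.compress([1, 1, 2, 2, 2, 3])
[{2, :a}, {1, :b}, {2, :c}] = Elixir99.Lists.encode([:a, :a, :b, :c, :c])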
# Source: lib/elixir99_lists.ex
defmodule RateTheDubWeb.APIController do
@moduledoc """
This controller is for the read-only JSON API for ratings of anime series.
See the [API Documentation](../../../docs/API.md) for more info.
TODO caching and rate limiting?
"""
use RateTheDubWeb, :controller
alias RateTheDub.Anime
alias RateTheDub.Anime.AnimeSeries
alias RateTheDub.DubVotes
@base_attrs %{
jsonapi: %{version: "1.0"},
links: %{self: "https://ratethedub.com/"}
}
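# Every endpoint below responds with a JSON:API-style envelope built from
# @base_attrs; the shape (sketched from the code, not captured output) is:
#
#     {"jsonapi": {"version": "1.0"},
#      "links": {"self": "https://ratethedub.com/"},
#      "data": ...}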
def index(conn, _params) do
conn
|> json(Map.put(@base_attrs, :data, %{}))
end
def featured(conn, _params) do
data =
Anime.get_featured()
|> Enum.map(fn %{mal_id: id} = series ->
%{
type: "series_lang_votes",
id: id,
attributes: %{
mal_id: id,
language: series.featured_in,
votes: DubVotes.count_votes_for(series.mal_id, series.featured_in)
},
links: %{self: "https://ratethedub.com/#{series.featured_in}/anime/#{id}"}
}
end)
conn
|> json(Map.put(@base_attrs, :data, data))
end
def trending(conn, _params) do
data =
Anime.get_trending()
|> Enum.map(fn [id, lang, votes] ->
%{
type: "series_lang_votes",
id: id,
attributes: %{mal_id: id, language: lang, votes: votes},
links: %{self: "https://ratethedub.com/#{lang}/anime/#{id}"}
}
end)
conn
|> json(Map.put(@base_attrs, :data, data))
end
def top(conn, _params) do
data =
Anime.get_top_rated()
|> Enum.map(fn [id, lang, votes] ->
%{
type: "series_lang_votes",
id: id,
attributes: %{mal_id: id, language: lang, votes: votes},
links: %{self: "https://ratethedub.com/#{lang}/anime/#{id}"}
}
end)
conn
|> json(Map.put(@base_attrs, :data, data))
end
def series(conn, %{"id" => id}) do
case Anime.get_anime_series(id) do
%AnimeSeries{} = series ->
resp =
%{
data: %{
type: "anime_series",
id: series.mal_id,
attributes: %{
mal_id: series.mal_id,
dubbed_in: series.dubbed_in,
votes:
series.dubbed_in
|> Enum.map(&{&1, DubVotes.count_votes_for(series.mal_id, &1)})
|> Map.new()
},
links: %{self: "https://ratethedub.com/anime/#{series.mal_id}"}
}
}
|> Enum.into(@base_attrs)
conn
|> json(resp)
nil ->
resp =
%{errors: [%{status: "404", title: "Anime Series Not Found"}]}
|> Enum.into(@base_attrs)
conn
|> put_status(:not_found)
|> json(resp)
end
end
end
# Source: lib/ratethedub_web/controllers/api_controller.ex