# go-memdb
Provides the `memdb` package that implements a simple in-memory database
built on immutable radix trees. The database provides Atomicity, Consistency,
and Isolation from ACID. Because it is in-memory, it does not provide Durability.
The database is instantiated with a schema that specifies the tables and indexes
that exist and allows transactions to be executed.
The database provides the following:
* Multi-Version Concurrency Control (MVCC) - By leveraging immutable radix trees
the database is able to support any number of concurrent readers without locking,
and allows a writer to make progress.
* Transaction Support - The database allows for rich transactions, in which multiple
objects are inserted, updated or deleted. The transactions can span multiple tables,
and are applied atomically. The database provides atomicity and isolation in ACID
terminology, such that until commit the updates are not visible.
* Rich Indexing - Tables can support any number of indexes, which can be simple like
a single field index, or more advanced compound field indexes. Certain types like
UUID can be efficiently compressed from strings into byte indexes for reduced
storage requirements.
For the underlying immutable radix trees, see [go-immutable-radix](https://github.com/hashicorp/go-immutable-radix).
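The compound indexing mentioned above can be sketched as an extra entry in a table's `Indexes` map. This is an illustrative fragment only: the `"name_age"` index name and the `Name`/`Age` fields are hypothetical, and it assumes the `CompoundIndex` and `IntFieldIndex` indexers provided by the package:

```go
// Hypothetical second index combining two fields of the indexed struct.
"name_age": &memdb.IndexSchema{
	Name:   "name_age",
	Unique: false,
	Indexer: &memdb.CompoundIndex{
		Indexes: []memdb.Indexer{
			&memdb.StringFieldIndex{Field: "Name"},
			&memdb.IntFieldIndex{Field: "Age"},
		},
	},
},
```

Lookups against a compound index pass one argument per sub-index, e.g. `txn.First("person", "name_age", "Joe", 30)`.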
Documentation
=============
The full documentation is available on [Godoc](http://godoc.org/github.com/hashicorp/go-memdb).
Example
=======
Below is a simple example of usage:
```go
// Create a sample struct
type Person struct {
	Email string
	Name  string
	Age   int
}

// Create the DB schema
schema := &memdb.DBSchema{
	Tables: map[string]*memdb.TableSchema{
		"person": &memdb.TableSchema{
			Name: "person",
			Indexes: map[string]*memdb.IndexSchema{
				"id": &memdb.IndexSchema{
					Name:    "id",
					Unique:  true,
					Indexer: &memdb.StringFieldIndex{Field: "Email"},
				},
			},
		},
	},
}

// Create a new database
db, err := memdb.NewMemDB(schema)
if err != nil {
	panic(err)
}

// Create a write transaction
txn := db.Txn(true)

// Insert a new person
p := &Person{"joe@aol.com", "Joe", 30}
if err := txn.Insert("person", p); err != nil {
	panic(err)
}

// Commit the transaction
txn.Commit()

// Create read-only transaction
txn = db.Txn(false)
defer txn.Abort()

// Lookup by email
raw, err := txn.First("person", "id", "joe@aol.com")
if err != nil {
	panic(err)
}

// Say hi!
fmt.Printf("Hello %s!", raw.(*Person).Name)
```
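A natural follow-up is enumerating rows instead of fetching a single one. Continuing from the read-only transaction above, the fragment below assumes `txn.Get` returns a result iterator over an index (as in current versions of the package):

```go
// List all the people, in "id" (email) index order
it, err := txn.Get("person", "id")
if err != nil {
	panic(err)
}
for obj := it.Next(); obj != nil; obj = it.Next() {
	p := obj.(*Person)
	fmt.Printf("Hello %s!\n", p.Name)
}
```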
/**
* Copyright (c) Microsoft Corporation. All rights reserved.
* Licensed under the MIT License. See License.txt in the project root for
* license information.
*
* Code generated by Microsoft (R) AutoRest Code Generator.
*/
package com.microsoft.azure.management.network.v2020_03_01.implementation;
import com.microsoft.azure.management.network.v2020_03_01.OperationDisplay;
import com.microsoft.azure.management.network.v2020_03_01.OperationPropertiesFormatServiceSpecification;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.microsoft.rest.serializer.JsonFlatten;
/**
* Network REST API operation definition.
*/
@JsonFlatten
public class OperationInner {
/**
* Operation name: {provider}/{resource}/{operation}.
*/
@JsonProperty(value = "name")
private String name;
/**
* Display metadata associated with the operation.
*/
@JsonProperty(value = "display")
private OperationDisplay display;
/**
* Origin of the operation.
*/
@JsonProperty(value = "origin")
private String origin;
/**
* Specification of the service.
*/
@JsonProperty(value = "properties.serviceSpecification")
private OperationPropertiesFormatServiceSpecification serviceSpecification;
/**
* Get operation name: {provider}/{resource}/{operation}.
*
* @return the name value
*/
public String name() {
return this.name;
}
/**
* Set operation name: {provider}/{resource}/{operation}.
*
* @param name the name value to set
* @return the OperationInner object itself.
*/
public OperationInner withName(String name) {
this.name = name;
return this;
}
/**
* Get display metadata associated with the operation.
*
* @return the display value
*/
public OperationDisplay display() {
return this.display;
}
/**
* Set display metadata associated with the operation.
*
* @param display the display value to set
* @return the OperationInner object itself.
*/
public OperationInner withDisplay(OperationDisplay display) {
this.display = display;
return this;
}
/**
* Get origin of the operation.
*
* @return the origin value
*/
public String origin() {
return this.origin;
}
/**
* Set origin of the operation.
*
* @param origin the origin value to set
* @return the OperationInner object itself.
*/
public OperationInner withOrigin(String origin) {
this.origin = origin;
return this;
}
/**
* Get specification of the service.
*
* @return the serviceSpecification value
*/
public OperationPropertiesFormatServiceSpecification serviceSpecification() {
return this.serviceSpecification;
}
/**
* Set specification of the service.
*
* @param serviceSpecification the serviceSpecification value to set
* @return the OperationInner object itself.
*/
public OperationInner withServiceSpecification(OperationPropertiesFormatServiceSpecification serviceSpecification) {
this.serviceSpecification = serviceSpecification;
return this;
}
}
[
"MONGOD",
"CONNECT",
"FIND",
"FIND PROJECT",
"INSERT",
"UPDATE",
"REMOVE",
"COUNT",
"AGGREGATE"
]
#### What is Codis?
Codis is a distributed Redis service developed by the Wandoujia infrastructure team. Codis can be viewed as a Redis server with effectively infinite memory that can scale elastically, which makes it a good fit for storage workloads. If you need pub/sub-style commands such as SUBSCRIBE and PUBLISH, Codis does not support them; always remember that Codis is a distributed storage system.
#### Does Codis support etcd?
Yes, please read the tutorial.
#### Can I use Codis directly in my existing services?
That depends.
Two cases:
1) Twemproxy users:
Yes, Codis fully supports twemproxy commands. Furthermore, using the redis-port tool, you can synchronize the data on twemproxy onto your Codis cluster.
2) Raw Redis users:
That depends. If you use any of the following commands:
BGREWRITEAOF, BGSAVE, BITOP, BLPOP, BRPOP, BRPOPLPUSH, CLIENT, CONFIG, DBSIZE, DEBUG, DISCARD, EXEC, FLUSHALL, FLUSHDB, KEYS, LASTSAVE, MIGRATE, MONITOR, MOVE, MSETNX, MULTI, OBJECT, PSUBSCRIBE, PUBLISH, PUNSUBSCRIBE, RANDOMKEY, RENAME, RENAMENX, RESTORE, SAVE, SCAN, SCRIPT, SHUTDOWN, SLAVEOF, SLOTSCHECK, SLOTSDEL, SLOTSINFO, SLOTSMGRTONE, SLOTSMGRTSLOT, SLOTSMGRTTAGONE, SLOTSMGRTTAGSLOT, SLOWLOG, SUBSCRIBE, SYNC, TIME, UNSUBSCRIBE, UNWATCH, WATCH
you should modify your code, because Codis does not support these commands.
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See the LICENSE.txt file in the project root
// for the license information.
void
test_grow1(void)
{
struct flex_item *root = flex_item_with_size(60, 240);
struct flex_item *child1 = flex_item_with_size(60, 30);
flex_item_set_grow(child1, 0);
flex_item_add(root, child1);
struct flex_item *child2 = flex_item_with_size(60, 0);
flex_item_set_grow(child2, 1);
flex_item_add(root, child2);
struct flex_item *child3 = flex_item_with_size(60, 0);
flex_item_set_grow(child3, 2);
flex_item_add(root, child3);
flex_layout(root);
TEST_FRAME_EQUAL(child1, 0, 0, 60, 30);
TEST_FRAME_EQUAL(child2, 0, 30, 60, 70);
TEST_FRAME_EQUAL(child3, 0, 100, 60, 140);
flex_item_free(root);
}
void
test_grow2(void)
{
struct flex_item *root = flex_item_with_size(100, 100);
struct flex_item *child1 = flex_item_with_size(100, 20);
flex_item_set_grow(child1, 1);
flex_item_add(root, child1);
struct flex_item *child2 = flex_item_with_size(100, 20);
flex_item_set_grow(child2, 0);
flex_item_add(root, child2);
struct flex_item *child3 = flex_item_with_size(100, 20);
flex_item_add(root, child3);
flex_layout(root);
TEST_FRAME_EQUAL(child1, 0, 0, 100, 60);
TEST_FRAME_EQUAL(child2, 0, 60, 100, 20);
TEST_FRAME_EQUAL(child3, 0, 80, 100, 20);
flex_item_free(root);
}
void
test_grow3(void)
{
// The grow attributes aren't taken into account when there is no flexible
// space available.
struct flex_item *root = flex_item_with_size(100, 100);
struct flex_item *child1 = flex_item_with_size(100, 50);
flex_item_set_grow(child1, 2);
flex_item_add(root, child1);
struct flex_item *child2 = flex_item_with_size(100, 50);
flex_item_set_grow(child2, 3);
flex_item_add(root, child2);
flex_layout(root);
TEST_FRAME_EQUAL(child1, 0, 0, 100, 50);
TEST_FRAME_EQUAL(child2, 0, 50, 100, 50);
flex_item_free(root);
}
void
test_grow4(void)
{
// The grow attribute is not inherited from children.
struct flex_item *root = flex_item_with_size(100, 100);
flex_item_set_grow(root, 2);
struct flex_item *child1 = flex_item_with_size(100, 25);
flex_item_add(root, child1);
struct flex_item *child2 = flex_item_with_size(100, 25);
flex_item_add(root, child2);
TEST_EQUAL(flex_item_get_grow(child1), 0);
TEST_EQUAL(flex_item_get_grow(child2), 0);
flex_layout(root);
TEST_FRAME_EQUAL(child1, 0, 0, 100, 25);
TEST_FRAME_EQUAL(child2, 0, 25, 100, 25);
flex_item_free(root);
}
void
test_grow5(void)
{
// All the container space is used when there is only one item with a
// positive value for the grow attribute.
struct flex_item *root = flex_item_with_size(100, 100);
struct flex_item *child1 = flex_item_with_size(100, 25);
flex_item_set_grow(child1, 1);
flex_item_add(root, child1);
flex_layout(root);
TEST_FRAME_EQUAL(child1, 0, 0, 100, 100);
flex_item_free(root);
}
void
test_grow6(void)
{
struct flex_item *root = flex_item_with_size(100, 100);
struct flex_item *child1 = flex_item_with_size(100, 45);
flex_item_set_grow(child1, 1);
flex_item_add(root, child1);
struct flex_item *child2 = flex_item_with_size(100, 45);
flex_item_set_grow(child2, 1);
flex_item_add(root, child2);
flex_layout(root);
TEST_FRAME_EQUAL(child1, 0, 0, 100, 50);
TEST_FRAME_EQUAL(child2, 0, 50, 100, 50);
flex_item_free(root);
}
void
test_grow7(void)
{
// Sizes of flexible items should be ignored when growing.
struct flex_item *root = flex_item_with_size(500, 600);
struct flex_item *child1 = flex_item_with_size(250, 0);
flex_item_set_grow(child1, 1);
flex_item_add(root, child1);
struct flex_item *child2 = flex_item_with_size(250, 50);
flex_item_set_grow(child2, 1);
flex_item_add(root, child2);
struct flex_item *child3 = flex_item_with_size(250, 0);
flex_item_add(root, child3);
struct flex_item *child4 = flex_item_with_size(250, 0);
flex_item_set_grow(child4, 1);
flex_item_add(root, child4);
struct flex_item *child5 = flex_item_with_size(250, 0);
flex_item_add(root, child5);
flex_layout(root);
TEST_FRAME_EQUAL(child1, 0, 0, 250, 200);
TEST_FRAME_EQUAL(child2, 0, 200, 250, 200);
TEST_FRAME_EQUAL(child3, 0, 400, 250, 0);
TEST_FRAME_EQUAL(child4, 0, 400, 250, 200);
TEST_FRAME_EQUAL(child5, 0, 600, 250, 0);
flex_item_free(root);
}
void
test_grow8(void)
{
// Grow can be floating point.
struct flex_item *root = flex_item_with_size(100, 100);
struct flex_item *child1 = flex_item_with_size(100, 10);
flex_item_add(root, child1);
struct flex_item *child2 = flex_item_with_size(100, 20);
flex_item_set_grow(child2, 1);
flex_item_add(root, child2);
struct flex_item *child3 = flex_item_with_size(100, 20);
flex_item_set_grow(child3, 1.5);
flex_item_add(root, child3);
flex_layout(root);
TEST_FRAME_EQUAL(child1, 0, 0, 100, 10);
TEST_FRAME_EQUAL(child2, 0, 10, 100, 36);
TEST_FRAME_EQUAL(child3, 0, 46, 100, 54);
flex_item_free(root);
}
package izumi.logstage.api
import scala.annotation.nowarn
import izumi.fundamentals.platform.language.SourceFilePosition
import izumi.logstage.api.Log._
import izumi.logstage.api.rendering.{LogstageCodec, RenderingOptions, StringRenderingPolicy}
import org.scalatest.wordspec.AnyWordSpec
import scala.util.Random
@nowarn("msg=[Ee]xpression.*logger")
class BasicLoggingTest extends AnyWordSpec {
"Argument extraction macro" should {
"extract argument names from an arbitrary string" in {
val arg1 = 1
val arg2 = "argument 2"
val message = Message(s"argument1: $arg1, argument2: $arg2, argument2 again: $arg2, expression ${2 + 2}, ${2 + 2}")
val expectation = List(
LogArg(Seq("arg1"), 1, hiddenName = false, Some(LogstageCodec.LogstageCodecInt)),
LogArg(Seq("arg2"), "argument 2", hiddenName = false, Some(LogstageCodec.LogstageCodecString)),
LogArg(Seq("arg2"), "argument 2", hiddenName = false, Some(LogstageCodec.LogstageCodecString)),
LogArg(Seq("UNNAMED:4"), 4, hiddenName = false, Some(LogstageCodec.LogstageCodecInt)),
LogArg(Seq("UNNAMED:4"), 4, hiddenName = false, Some(LogstageCodec.LogstageCodecInt)),
)
val expectedParts = List("argument1: ", ", argument2: ", ", argument2 again: ", ", expression ", ", ", "")
assert(message.args == expectation)
assert(message.template.parts == expectedParts)
val message1 = Message(s"expression: ${Random.self.nextInt() + 1}")
assert(message1.args.head.name == "EXPRESSION:scala.util.Random.self.nextInt().+(1)")
assert(message1.template.parts == List("expression: ", ""))
}
"support .stripMargin" in {
val m = "M E S S A G E"
val message1 = Message {
s"""This
|is a
|multiline ${m -> "message"}""".stripMargin
}
assert(message1.template.parts.toList == List("This\nis a\nmultiline ", ""))
assert(message1.args == List(LogArg(Seq("message"), m, hiddenName = false, Some(LogstageCodec.LogstageCodecString))))
val message2 = Message("single line with stripMargin".stripMargin)
assert(message2.template.parts.toList == List("single line with stripMargin"))
assert(message2.args == List.empty)
val message3 = Message {
"""Hello
|there!
|""".stripMargin
}
assert(message3.template.parts.toList == List("Hello\nthere!\n"))
assert(message3.args == List.empty)
}
}
"String rendering policy" should {
"not fail on unbalanced messages" in {
val p = new StringRenderingPolicy(RenderingOptions.default.copy(colored = false), None)
val rendered =
render(p, Message(StringContext("begin ", " end"), Seq(LogArg(Seq("[a1]"), 1, hiddenName = false, None), LogArg(Seq("[a2]"), 2, hiddenName = false, None))))
assert(rendered.endsWith("begin [a_1]=1 end {{ [a_2]=2 }}"))
}
}
"logstage" should {
"allow constructing Log.Message" in {
val i = 5
val s = "hi"
val msg = Message(s"begin $i $s end")
assert(
msg == Message(
StringContext("begin ", " ", " end"),
Seq(
LogArg(Seq("i"), 5, hiddenName = false, Some(LogstageCodec.LogstageCodecInt)),
LogArg(Seq("s"), "hi", hiddenName = false, Some(LogstageCodec.LogstageCodecString)),
),
)
)
}
"allow concatenating Log.Message" should {
"multiple parts" in {
val msg1 = Message(s"begin1${1.1}middle1${1.2}end1")
val msg2 = Message(s"begin2 ${2.1} middle2 ${2.2} end2 ")
val msg3 = Message(s" begin3${3.1}middle3 ${3.2}end3")
val msgConcatenated = msg1 + msg2 + msg3
assert(
msgConcatenated.template.parts == Seq(
"begin1",
"middle1",
"end1begin2 ",
" middle2 ",
" end2 begin3",
"middle3 ",
"end3",
)
)
assert(msgConcatenated.args.map(_.value) == Seq(1.1, 1.2, 2.1, 2.2, 3.1, 3.2))
}
"one part" in {
val msg1 = Message(s"begin1")
val msg2 = Message(s"${2}")
val msg3 = Message(s"end3")
val msgConcatenated = msg1 + msg2 + msg3
assert(
msgConcatenated.template.parts == Seq(
"begin1",
"end3",
)
)
}
"zero parts" in {
val msgOnePart = Message(s"onePart")
val msgZeroParts = Message("")
val msgEmpty = Message.empty
assert(
(msgOnePart + msgZeroParts).template.parts == Seq(
"onePart"
)
)
assert(
(msgZeroParts + msgOnePart).template.parts == Seq(
"onePart"
)
)
assert(
(msgOnePart + msgEmpty).template.parts == Seq(
"onePart"
)
)
assert(
(msgEmpty + msgOnePart).template.parts == Seq(
"onePart"
)
)
assert((msgEmpty + msgEmpty).template.parts == Seq(""))
assert((msgZeroParts + msgZeroParts).template.parts == Seq(""))
assert((msgEmpty + msgZeroParts).template.parts == Seq(""))
assert((msgZeroParts + msgEmpty).template.parts == Seq(""))
}
"empty StringContext" in {
val msgEmptyStringContext = Message(StringContext(), Nil)
val msgOnePart = Message(s"onePart")
assert(
(msgOnePart + msgEmptyStringContext).template.parts == Seq(
"onePart"
)
)
assert(
(msgEmptyStringContext + msgOnePart).template.parts == Seq(
"onePart"
)
)
}
}
}
private def render(p: StringRenderingPolicy, m: Message) = {
p.render(
Entry(
m,
Context(
StaticExtendedContext(LoggerId("test"), SourceFilePosition("test.scala", 0)),
DynamicContext(Level.Warn, ThreadData("test", 0), 0),
CustomContext(Seq.empty),
),
)
)
}
}
/* 7zBuf.h -- Byte Buffer
2017-04-03 : Igor Pavlov : Public domain */
#ifndef __7Z_BUF_H
#define __7Z_BUF_H
#include "7zTypes.h"
EXTERN_C_BEGIN
typedef struct
{
  Byte *data;
  size_t size;
} CBuf;
void Buf_Init(CBuf *p);
int Buf_Create(CBuf *p, size_t size, ISzAllocPtr alloc);
void Buf_Free(CBuf *p, ISzAllocPtr alloc);
typedef struct
{
  Byte *data;
  size_t size;
  size_t pos;
} CDynBuf;
void DynBuf_Construct(CDynBuf *p);
void DynBuf_SeekToBeg(CDynBuf *p);
int DynBuf_Write(CDynBuf *p, const Byte *buf, size_t size, ISzAllocPtr alloc);
void DynBuf_Free(CDynBuf *p, ISzAllocPtr alloc);
EXTERN_C_END
#endif
// Copyright 2016 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef NET_QUIC_TEST_TOOLS_MOCK_QUIC_CLIENT_PROMISED_INFO_H_
#define NET_QUIC_TEST_TOOLS_MOCK_QUIC_CLIENT_PROMISED_INFO_H_
#include <string>
#include "base/macros.h"
#include "net/quic/core/quic_client_promised_info.h"
#include "net/quic/core/quic_protocol.h"
#include "testing/gmock/include/gmock/gmock.h"
namespace net {
namespace test {
class MockQuicClientPromisedInfo : public QuicClientPromisedInfo {
 public:
  MockQuicClientPromisedInfo(QuicClientSessionBase* session,
                             QuicStreamId id,
                             std::string url);
  ~MockQuicClientPromisedInfo() override;

  MOCK_METHOD2(HandleClientRequest,
               QuicAsyncStatus(const SpdyHeaderBlock& headers,
                               QuicClientPushPromiseIndex::Delegate* delegate));
};
} // namespace test
} // namespace net
#endif // NET_QUIC_TEST_TOOLS_MOCK_QUIC_CLIENT_PROMISED_INFO_H_
/*
* Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
* All rights reserved
* www.brocade.com
*
* Linux driver for Brocade Fibre Channel Host Bus Adapter.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License (GPL) Version 2 as
* published by the Free Software Foundation
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include "bfad_drv.h"
#include "bfa_ioc.h"
#include "bfi_cbreg.h"
#include "bfa_defs.h"
BFA_TRC_FILE(CNA, IOC_CB);
/*
* forward declarations
*/
static bfa_boolean_t bfa_ioc_cb_firmware_lock(struct bfa_ioc_s *ioc);
static void bfa_ioc_cb_firmware_unlock(struct bfa_ioc_s *ioc);
static void bfa_ioc_cb_reg_init(struct bfa_ioc_s *ioc);
static void bfa_ioc_cb_map_port(struct bfa_ioc_s *ioc);
static void bfa_ioc_cb_isr_mode_set(struct bfa_ioc_s *ioc, bfa_boolean_t msix);
static void bfa_ioc_cb_notify_fail(struct bfa_ioc_s *ioc);
static void bfa_ioc_cb_ownership_reset(struct bfa_ioc_s *ioc);
static bfa_boolean_t bfa_ioc_cb_sync_start(struct bfa_ioc_s *ioc);
static void bfa_ioc_cb_sync_join(struct bfa_ioc_s *ioc);
static void bfa_ioc_cb_sync_leave(struct bfa_ioc_s *ioc);
static void bfa_ioc_cb_sync_ack(struct bfa_ioc_s *ioc);
static bfa_boolean_t bfa_ioc_cb_sync_complete(struct bfa_ioc_s *ioc);
static struct bfa_ioc_hwif_s hwif_cb;
/*
* Called from bfa_ioc_attach() to map asic specific calls.
*/
void
bfa_ioc_set_cb_hwif(struct bfa_ioc_s *ioc)
{
hwif_cb.ioc_pll_init = bfa_ioc_cb_pll_init;
hwif_cb.ioc_firmware_lock = bfa_ioc_cb_firmware_lock;
hwif_cb.ioc_firmware_unlock = bfa_ioc_cb_firmware_unlock;
hwif_cb.ioc_reg_init = bfa_ioc_cb_reg_init;
hwif_cb.ioc_map_port = bfa_ioc_cb_map_port;
hwif_cb.ioc_isr_mode_set = bfa_ioc_cb_isr_mode_set;
hwif_cb.ioc_notify_fail = bfa_ioc_cb_notify_fail;
hwif_cb.ioc_ownership_reset = bfa_ioc_cb_ownership_reset;
hwif_cb.ioc_sync_start = bfa_ioc_cb_sync_start;
hwif_cb.ioc_sync_join = bfa_ioc_cb_sync_join;
hwif_cb.ioc_sync_leave = bfa_ioc_cb_sync_leave;
hwif_cb.ioc_sync_ack = bfa_ioc_cb_sync_ack;
hwif_cb.ioc_sync_complete = bfa_ioc_cb_sync_complete;
ioc->ioc_hwif = &hwif_cb;
}
/*
* Return true if firmware of current driver matches the running firmware.
*/
static bfa_boolean_t
bfa_ioc_cb_firmware_lock(struct bfa_ioc_s *ioc)
{
struct bfi_ioc_image_hdr_s fwhdr;
uint32_t fwstate = readl(ioc->ioc_regs.ioc_fwstate);
if (fwstate == BFI_IOC_UNINIT)
return BFA_TRUE;
bfa_ioc_fwver_get(ioc, &fwhdr);
if (swab32(fwhdr.exec) == BFI_BOOT_TYPE_NORMAL)
return BFA_TRUE;
bfa_trc(ioc, fwstate);
bfa_trc(ioc, fwhdr.exec);
writel(BFI_IOC_UNINIT, ioc->ioc_regs.ioc_fwstate);
return BFA_TRUE;
}
static void
bfa_ioc_cb_firmware_unlock(struct bfa_ioc_s *ioc)
{
}
/*
* Notify other functions on HB failure.
*/
static void
bfa_ioc_cb_notify_fail(struct bfa_ioc_s *ioc)
{
writel(__PSS_ERR_STATUS_SET, ioc->ioc_regs.err_set);
readl(ioc->ioc_regs.err_set);
}
/*
* Host to LPU mailbox message addresses
*/
static struct { u32 hfn_mbox, lpu_mbox, hfn_pgn; } iocreg_fnreg[] = {
{ HOSTFN0_LPU_MBOX0_0, LPU_HOSTFN0_MBOX0_0, HOST_PAGE_NUM_FN0 },
{ HOSTFN1_LPU_MBOX0_8, LPU_HOSTFN1_MBOX0_8, HOST_PAGE_NUM_FN1 }
};
/*
* Host <-> LPU mailbox command/status registers
*/
static struct { u32 hfn, lpu; } iocreg_mbcmd[] = {
{ HOSTFN0_LPU0_CMD_STAT, LPU0_HOSTFN0_CMD_STAT },
{ HOSTFN1_LPU1_CMD_STAT, LPU1_HOSTFN1_CMD_STAT }
};
static void
bfa_ioc_cb_reg_init(struct bfa_ioc_s *ioc)
{
void __iomem *rb;
int pcifn = bfa_ioc_pcifn(ioc);
rb = bfa_ioc_bar0(ioc);
ioc->ioc_regs.hfn_mbox = rb + iocreg_fnreg[pcifn].hfn_mbox;
ioc->ioc_regs.lpu_mbox = rb + iocreg_fnreg[pcifn].lpu_mbox;
ioc->ioc_regs.host_page_num_fn = rb + iocreg_fnreg[pcifn].hfn_pgn;
if (ioc->port_id == 0) {
ioc->ioc_regs.heartbeat = rb + BFA_IOC0_HBEAT_REG;
ioc->ioc_regs.ioc_fwstate = rb + BFA_IOC0_STATE_REG;
ioc->ioc_regs.alt_ioc_fwstate = rb + BFA_IOC1_STATE_REG;
} else {
ioc->ioc_regs.heartbeat = (rb + BFA_IOC1_HBEAT_REG);
ioc->ioc_regs.ioc_fwstate = (rb + BFA_IOC1_STATE_REG);
ioc->ioc_regs.alt_ioc_fwstate = (rb + BFA_IOC0_STATE_REG);
}
/*
* Host <-> LPU mailbox command/status registers
*/
ioc->ioc_regs.hfn_mbox_cmd = rb + iocreg_mbcmd[pcifn].hfn;
ioc->ioc_regs.lpu_mbox_cmd = rb + iocreg_mbcmd[pcifn].lpu;
/*
* PSS control registers
*/
ioc->ioc_regs.pss_ctl_reg = (rb + PSS_CTL_REG);
ioc->ioc_regs.pss_err_status_reg = (rb + PSS_ERR_STATUS_REG);
ioc->ioc_regs.app_pll_fast_ctl_reg = (rb + APP_PLL_400_CTL_REG);
ioc->ioc_regs.app_pll_slow_ctl_reg = (rb + APP_PLL_212_CTL_REG);
/*
* IOC semaphore registers and serialization
*/
ioc->ioc_regs.ioc_sem_reg = (rb + HOST_SEM0_REG);
ioc->ioc_regs.ioc_init_sem_reg = (rb + HOST_SEM2_REG);
/*
* sram memory access
*/
ioc->ioc_regs.smem_page_start = (rb + PSS_SMEM_PAGE_START);
ioc->ioc_regs.smem_pg0 = BFI_IOC_SMEM_PG0_CB;
/*
* err set reg : for notification of hb failure
*/
ioc->ioc_regs.err_set = (rb + ERR_SET_REG);
}
/*
* Initialize IOC to port mapping.
*/
static void
bfa_ioc_cb_map_port(struct bfa_ioc_s *ioc)
{
/*
* For crossbow, port id is same as pci function.
*/
ioc->port_id = bfa_ioc_pcifn(ioc);
bfa_trc(ioc, ioc->port_id);
}
/*
* Set interrupt mode for a function: INTX or MSIX
*/
static void
bfa_ioc_cb_isr_mode_set(struct bfa_ioc_s *ioc, bfa_boolean_t msix)
{
}
/*
* Synchronized IOC failure processing routines
*/
static bfa_boolean_t
bfa_ioc_cb_sync_start(struct bfa_ioc_s *ioc)
{
return bfa_ioc_cb_sync_complete(ioc);
}
/*
* Cleanup hw semaphore and usecnt registers
*/
static void
bfa_ioc_cb_ownership_reset(struct bfa_ioc_s *ioc)
{
/*
* Read the hw sem reg to make sure that it is locked
* before we clear it. If it is not locked, writing 1
* will lock it instead of clearing it.
*/
readl(ioc->ioc_regs.ioc_sem_reg);
writel(1, ioc->ioc_regs.ioc_sem_reg);
}
/*
* Synchronized IOC failure processing routines
*/
static void
bfa_ioc_cb_sync_join(struct bfa_ioc_s *ioc)
{
}
static void
bfa_ioc_cb_sync_leave(struct bfa_ioc_s *ioc)
{
}
static void
bfa_ioc_cb_sync_ack(struct bfa_ioc_s *ioc)
{
writel(BFI_IOC_FAIL, ioc->ioc_regs.ioc_fwstate);
}
static bfa_boolean_t
bfa_ioc_cb_sync_complete(struct bfa_ioc_s *ioc)
{
uint32_t fwstate, alt_fwstate;
fwstate = readl(ioc->ioc_regs.ioc_fwstate);
/*
* At this point, this IOC is holding the hw sem in the
* start path (fwcheck) OR in the disable/enable path
* OR to check if the other IOC has acknowledged failure.
*
* So, this IOC can be in UNINIT, INITING, DISABLED, FAIL
* or in MEMTEST states. In a normal scenario, this IOC
* can not be in OP state when this function is called.
*
* However, this IOC could still be in OP state when
* the OS driver is starting up, if the OptROM code has
* left it in that state.
*
* If we had marked this IOC's fwstate as BFI_IOC_FAIL
* in the failure case and now, if the fwstate is not
* BFI_IOC_FAIL it implies that the other PCI fn has
* reinitialized the ASIC or this IOC got disabled, so
* return TRUE.
*/
if (fwstate == BFI_IOC_UNINIT ||
fwstate == BFI_IOC_INITING ||
fwstate == BFI_IOC_DISABLED ||
fwstate == BFI_IOC_MEMTEST ||
fwstate == BFI_IOC_OP)
return BFA_TRUE;
else {
alt_fwstate = readl(ioc->ioc_regs.alt_ioc_fwstate);
if (alt_fwstate == BFI_IOC_FAIL ||
alt_fwstate == BFI_IOC_DISABLED ||
alt_fwstate == BFI_IOC_UNINIT ||
alt_fwstate == BFI_IOC_INITING ||
alt_fwstate == BFI_IOC_MEMTEST)
return BFA_TRUE;
else
return BFA_FALSE;
}
}
bfa_status_t
bfa_ioc_cb_pll_init(void __iomem *rb, bfa_boolean_t fcmode)
{
u32 pll_sclk, pll_fclk;
pll_sclk = __APP_PLL_212_ENABLE | __APP_PLL_212_LRESETN |
__APP_PLL_212_P0_1(3U) |
__APP_PLL_212_JITLMT0_1(3U) |
__APP_PLL_212_CNTLMT0_1(3U);
pll_fclk = __APP_PLL_400_ENABLE | __APP_PLL_400_LRESETN |
__APP_PLL_400_RSEL200500 | __APP_PLL_400_P0_1(3U) |
__APP_PLL_400_JITLMT0_1(3U) |
__APP_PLL_400_CNTLMT0_1(3U);
writel(BFI_IOC_UNINIT, (rb + BFA_IOC0_STATE_REG));
writel(BFI_IOC_UNINIT, (rb + BFA_IOC1_STATE_REG));
writel(0xffffffffU, (rb + HOSTFN0_INT_MSK));
writel(0xffffffffU, (rb + HOSTFN1_INT_MSK));
writel(0xffffffffU, (rb + HOSTFN0_INT_STATUS));
writel(0xffffffffU, (rb + HOSTFN1_INT_STATUS));
writel(0xffffffffU, (rb + HOSTFN0_INT_MSK));
writel(0xffffffffU, (rb + HOSTFN1_INT_MSK));
writel(__APP_PLL_212_LOGIC_SOFT_RESET, rb + APP_PLL_212_CTL_REG);
writel(__APP_PLL_212_BYPASS | __APP_PLL_212_LOGIC_SOFT_RESET,
rb + APP_PLL_212_CTL_REG);
writel(__APP_PLL_400_LOGIC_SOFT_RESET, rb + APP_PLL_400_CTL_REG);
writel(__APP_PLL_400_BYPASS | __APP_PLL_400_LOGIC_SOFT_RESET,
rb + APP_PLL_400_CTL_REG);
udelay(2);
writel(__APP_PLL_212_LOGIC_SOFT_RESET, rb + APP_PLL_212_CTL_REG);
writel(__APP_PLL_400_LOGIC_SOFT_RESET, rb + APP_PLL_400_CTL_REG);
writel(pll_sclk | __APP_PLL_212_LOGIC_SOFT_RESET,
rb + APP_PLL_212_CTL_REG);
writel(pll_fclk | __APP_PLL_400_LOGIC_SOFT_RESET,
rb + APP_PLL_400_CTL_REG);
udelay(2000);
writel(0xffffffffU, (rb + HOSTFN0_INT_STATUS));
writel(0xffffffffU, (rb + HOSTFN1_INT_STATUS));
writel(pll_sclk, (rb + APP_PLL_212_CTL_REG));
writel(pll_fclk, (rb + APP_PLL_400_CTL_REG));
return BFA_STATUS_OK;
}
// RUN: %clang_cc1 -emit-llvm %s -o /dev/null
typedef union {
  long (*ap)[4];
} ptrs;

void DoAssignIteration() {
  ptrs abase;
  abase.ap += 27;
  Assignment(*abase.ap);
}
#include "search/region_info_getter.hpp"
#include "storage/country_decl.hpp"
#include "base/logging.hpp"
#include "base/stl_helpers.hpp"
#include "base/string_utils.hpp"
#include <cstddef>
using namespace std;
using namespace storage;
namespace search
{
namespace
{
// Calls |fn| on each node name on the way from |id| to the root of
// the |countries| tree, except the root. Does nothing if there are
// multiple ways from |id| to the |root|.
template <typename Fn>
void GetPathToRoot(storage::CountryId const & id, storage::CountryTree const & countries, Fn && fn)
{
vector<storage::CountryTree::Node const *> nodes;
countries.Find(id, nodes);
if (nodes.empty())
LOG(LWARNING, ("Can't find node in the countries tree for:", id));
if (nodes.size() != 1 || nodes[0]->IsRoot())
return;
auto const * cur = nodes[0];
do
{
fn(cur->Value().Name());
cur = &cur->Parent();
} while (!cur->IsRoot());
}
} // namespace
void RegionInfoGetter::LoadCountriesTree()
{
storage::Affiliations affiliations;
storage::CountryNameSynonyms countryNameSynonyms;
storage::MwmTopCityGeoIds mwmTopCityGeoIds;
storage::MwmTopCountryGeoIds mwmTopCountryGeoIds;
storage::LoadCountriesFromFile(COUNTRIES_FILE, m_countries, affiliations, countryNameSynonyms,
mwmTopCityGeoIds, mwmTopCountryGeoIds);
}
void RegionInfoGetter::SetLocale(string const & locale)
{
m_nameGetter = platform::GetTextByIdFactory(platform::TextSource::Countries, locale);
}
void RegionInfoGetter::GetLocalizedFullName(storage::CountryId const & id,
vector<string> & nameParts) const
{
size_t const kMaxNumParts = 2;
GetPathToRoot(id, m_countries, [&](storage::CountryId const & id) {
nameParts.push_back(GetLocalizedCountryName(id));
});
if (nameParts.size() > kMaxNumParts)
nameParts.erase(nameParts.begin(), nameParts.end() - kMaxNumParts);
base::EraseIf(nameParts, [&](string const & s) { return s.empty(); });
if (!nameParts.empty())
return;
// Tries to get at least localized name for |id|, if |id| is a
// disputed territory.
auto name = GetLocalizedCountryName(id);
if (!name.empty())
{
nameParts.push_back(name);
return;
}
// Tries to transform map name to the full name.
name = id;
storage::CountryInfo::FileName2FullName(name);
if (!name.empty())
nameParts.push_back(name);
}
string RegionInfoGetter::GetLocalizedFullName(storage::CountryId const & id) const
{
vector<string> parts;
GetLocalizedFullName(id, parts);
return strings::JoinStrings(parts, ", ");
}
string RegionInfoGetter::GetLocalizedCountryName(storage::CountryId const & id) const
{
if (!m_nameGetter)
return {};
auto const shortName = (*m_nameGetter)(id + " Short");
if (!shortName.empty())
return shortName;
auto const officialName = (*m_nameGetter)(id);
if (!officialName.empty())
return officialName;
return {};
}
} // namespace search
/**********************************************************************
* File: tessedit.h (Formerly tessedit.h)
* Description: Main program for merge of tess and editor.
* Author: Ray Smith
* Created: Tue Jan 07 15:21:46 GMT 1992
*
* (C) Copyright 1992, Hewlett-Packard Ltd.
** Licensed under the Apache License, Version 2.0 (the "License");
** you may not use this file except in compliance with the License.
** You may obtain a copy of the License at
** http://www.apache.org/licenses/LICENSE-2.0
** Unless required by applicable law or agreed to in writing, software
** distributed under the License is distributed on an "AS IS" BASIS,
** WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
** See the License for the specific language governing permissions and
** limitations under the License.
*
**********************************************************************/
#ifndef TESSEDIT_H
#define TESSEDIT_H
#include "blobs.h"
#include "pgedit.h"
#include "notdll.h"
//progress monitor
extern ETEXT_DESC *global_monitor;
#endif
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// This program generates the trie for width operations. The generated table
// includes width category information as well as the normalization mappings.
package main
import (
"bytes"
"fmt"
"io"
"log"
"math"
"unicode/utf8"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/triegen"
)
// See gen_common.go for flags.
func main() {
gen.Init()
genTables()
genTests()
gen.Repackage("gen_trieval.go", "trieval.go", "width")
gen.Repackage("gen_common.go", "common_test.go", "width")
}
func genTables() {
t := triegen.NewTrie("width")
// fold and inverse mappings. See mapComment for a description of the format
// of each entry. Add dummy value to make an index of 0 mean no mapping.
inverse := [][4]byte{{}}
mapping := map[[4]byte]int{[4]byte{}: 0}
getWidthData(func(r rune, tag elem, alt rune) {
idx := 0
if alt != 0 {
var buf [4]byte
buf[0] = byte(utf8.EncodeRune(buf[1:], alt))
s := string(r)
buf[buf[0]] ^= s[len(s)-1]
var ok bool
if idx, ok = mapping[buf]; !ok {
idx = len(mapping)
if idx > math.MaxUint8 {
log.Fatalf("Index %d does not fit in a byte.", idx)
}
mapping[buf] = idx
inverse = append(inverse, buf)
}
}
t.Insert(r, uint64(tag|elem(idx)))
})
w := &bytes.Buffer{}
gen.WriteUnicodeVersion(w)
sz, err := t.Gen(w)
if err != nil {
log.Fatal(err)
}
sz += writeMappings(w, inverse)
fmt.Fprintf(w, "// Total table size %d bytes (%dKiB)\n", sz, sz/1024)
gen.WriteGoFile(*outputFile, "width", w.Bytes())
}
const inverseDataComment = `
// inverseData contains 4-byte entries of the following format:
// <length> <modified UTF-8-encoded rune> <0 padding>
// The last byte of the UTF-8-encoded rune is xor-ed with the last byte of the
// UTF-8 encoding of the original rune. Mappings often have the following
// pattern:
// A -> A (U+FF21 -> U+0041)
// B -> B (U+FF22 -> U+0042)
// ...
// By xor-ing the last byte the same entry can be shared by many mappings. This
// reduces the total number of distinct entries by about two thirds.
// The resulting entry for the aforementioned mappings is
// { 0x01, 0xE0, 0x00, 0x00 }
// Using this entry to map U+FF21 (UTF-8 [EF BC A1]), we get
// E0 ^ A1 = 41.
// Similarly, for U+FF22 (UTF-8 [EF BC A2]), we get
// E0 ^ A2 = 42.
// Note that because of the xor-ing, the byte sequence stored in the entry is
// not valid UTF-8.`
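The xor-sharing scheme the comment describes can be sketched outside the generator. The helper names below are illustrative, not part of the width package; they only demonstrate why U+FF21→A and U+FF22→B collapse into the single entry `{0x01, 0xE0, 0x00, 0x00}`:

```python
def make_entry(orig: str, alt: str) -> bytes:
    """Build a 4-byte inverseData-style entry mapping orig -> alt.

    Layout: <length> <UTF-8 of alt, last byte xor-ed with the last
    byte of orig's UTF-8> <zero padding>.
    """
    a = alt.encode("utf-8")
    o = orig.encode("utf-8")
    buf = bytearray(4)
    buf[0] = len(a)
    buf[1:1 + len(a)] = a
    buf[len(a)] ^= o[-1]  # xor last byte with last byte of orig
    return bytes(buf)


def apply_entry(entry: bytes, orig: str) -> str:
    """Recover alt from an entry, given the original rune."""
    n = entry[0]
    b = bytearray(entry[1:1 + n])
    b[-1] ^= orig.encode("utf-8")[-1]  # undo the xor
    return b.decode("utf-8")
```

Because U+FF21 and U+FF22 differ from A and B only in their last UTF-8 byte, both mappings produce the same entry, which is the sharing the comment claims.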
func writeMappings(w io.Writer, data [][4]byte) int {
fmt.Fprintln(w, inverseDataComment)
fmt.Fprintf(w, "var inverseData = [%d][4]byte{\n", len(data))
for _, x := range data {
fmt.Fprintf(w, "{ 0x%02x, 0x%02x, 0x%02x, 0x%02x },\n", x[0], x[1], x[2], x[3])
}
fmt.Fprintln(w, "}")
return len(data) * 4
}
func genTests() {
w := &bytes.Buffer{}
fmt.Fprintf(w, "\nvar mapRunes = map[rune]struct{r rune; e elem}{\n")
getWidthData(func(r rune, tag elem, alt rune) {
if alt != 0 {
fmt.Fprintf(w, "\t0x%X: {0x%X, 0x%X},\n", r, alt, tag)
}
})
fmt.Fprintln(w, "}")
gen.WriteGoFile("runes_test.go", "width", w.Bytes())
}
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -inMemory -port 8222
/*
Copyright 2008 Intel Corporation
Use, modification and distribution are subject to the Boost Software License,
Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
*/
#ifndef BOOST_POLYGON_INTERVAL_DATA_HPP
#define BOOST_POLYGON_INTERVAL_DATA_HPP
#include "isotropy.hpp"
namespace boost { namespace polygon{
template <typename T>
class interval_data {
public:
typedef T coordinate_type;
inline interval_data()
#ifndef BOOST_POLYGON_MSVC
:coords_()
#endif
{}
inline interval_data(coordinate_type low, coordinate_type high)
#ifndef BOOST_POLYGON_MSVC
:coords_()
#endif
{
coords_[LOW] = low; coords_[HIGH] = high;
}
inline interval_data(const interval_data& that)
#ifndef BOOST_POLYGON_MSVC
:coords_()
#endif
{
(*this) = that;
}
inline interval_data& operator=(const interval_data& that) {
coords_[0] = that.coords_[0]; coords_[1] = that.coords_[1]; return *this;
}
template <typename T2>
inline interval_data& operator=(const T2& rvalue);
inline coordinate_type get(direction_1d dir) const {
return coords_[dir.to_int()];
}
inline coordinate_type low() const { return coords_[0]; }
inline coordinate_type high() const { return coords_[1]; }
inline bool operator==(const interval_data& that) const {
return low() == that.low() && high() == that.high(); }
inline bool operator!=(const interval_data& that) const {
return low() != that.low() || high() != that.high(); }
inline bool operator<(const interval_data& that) const {
if(coords_[0] < that.coords_[0]) return true;
if(coords_[0] > that.coords_[0]) return false;
if(coords_[1] < that.coords_[1]) return true;
return false;
}
inline bool operator<=(const interval_data& that) const { return !(that < *this); }
inline bool operator>(const interval_data& that) const { return that < *this; }
inline bool operator>=(const interval_data& that) const { return !((*this) < that); }
inline void set(direction_1d dir, coordinate_type value) {
coords_[dir.to_int()] = value;
}
private:
coordinate_type coords_[2];
};
}
}
#endif
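The comparison operators above derive `<=`, `>`, and `>=` from `operator<` alone (`that < *this` and its negations). The same design in a short Python sketch, where tuple comparison gives the identical low-then-high lexicographic ordering:

```python
from functools import total_ordering


@total_ordering
class Interval:
    """Sketch of interval_data's ordering: compare low first, then
    high; all other comparisons are derived from < and ==."""

    def __init__(self, low, high):
        self.coords = (low, high)  # index 0 = LOW, index 1 = HIGH

    def __eq__(self, other):
        return self.coords == other.coords

    def __lt__(self, other):
        return self.coords < other.coords  # lexicographic, as in C++
```

`functools.total_ordering` plays the role of the hand-written `<=`, `>`, `>=` one-liners in the C++ class.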
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
name: metering-ocp-dev
namespace: openshift-metering
spec:
configMap: metering-ocp
sourceType: internal
publisher: metering
displayName: Metering Operator Testing
// a link that only has an underline when you hover over it
@mixin hover-link {
text-decoration: none;
&:hover {
text-decoration: underline; } }
#include "graphviz_encode.h"
#include <boost/test/unit_test.hpp>
#include "graphviz_decode.h"
BOOST_AUTO_TEST_CASE(graphviz_encode_thorough)
{
// Graphviz encoding
{
for (const auto s : {
"A", "ABCDEFGHIJKLMN", "A B", " A B ", " A B ", // Spaces
"A\"B", "\"A\"B\"", "\"\"A\"\"B\"\"", // Quotes
"A\\B", "\\A\\B\\", "\\\\A\\\\B\\\\", // Backslash
"A,B", ",A,B,", ",,A,,B,," // Comma
}) {
const auto t = graphviz_encode(s);
const auto u = graphviz_decode(t);
BOOST_CHECK(s == u);
}
}
}
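The exact escaping rules of `graphviz_encode` are not shown in this test; a plausible backslash-escaping scheme that satisfies the round-trip property the test checks (spaces, quotes, backslashes, and commas survive encode-then-decode) might look like:

```python
def graphviz_encode(s: str) -> str:
    """Escape characters that are special in Graphviz attribute
    values. The escape set here is an assumption for illustration."""
    out = []
    for ch in s:
        if ch in '\\",':
            out.append("\\" + ch)
        else:
            out.append(ch)
    return "".join(out)


def graphviz_decode(s: str) -> str:
    """Reverse graphviz_encode: a backslash escapes the next char."""
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append(s[i + 1])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

Whatever the real escape set is, the property under test is the same: `graphviz_decode(graphviz_encode(s)) == s` for every tricky input.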
<Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
x:Class="Routing.App"
>
<Application.Resources>
</Application.Resources>
</Application>
//
// Base styles
//
.alert {
position: relative;
padding: $alert-padding-y $alert-padding-x;
margin-bottom: $alert-margin-bottom;
border: $alert-border-width solid transparent;
@include border-radius($alert-border-radius);
}
// Headings for larger alerts
.alert-heading {
// Specified to prevent conflicts of changing $headings-color
color: inherit;
}
// Provide class for links that match alerts
.alert-link {
font-weight: $alert-link-font-weight;
}
// Dismissible alerts
//
// Expand the right padding and account for the close button's positioning.
.alert-dismissible {
padding-right: $close-font-size + $alert-padding-x * 2;
// Adjust close link position
.close {
position: absolute;
top: 0;
right: 0;
padding: $alert-padding-y $alert-padding-x;
color: inherit;
}
}
// Alternate styles
//
// Generate contextual modifier classes for colorizing the alert.
@each $color, $value in $theme-colors {
.alert-#{$color} {
@include alert-variant(theme-color-level($color, $alert-bg-level), theme-color-level($color, $alert-border-level), theme-color-level($color, $alert-color-level));
}
}
--Taken from SQL Server Central question of the day, Feb 11 2009:
-- http://www.sqlservercentral.com/questions/T-SQL/65712/
--Discussion here:
-- http://www.sqlservercentral.com/Forums/Topic654391-1181-1.aspx
-- (interestingly, most online formatting tools get this wrong - afaik GuDu is the only
-- other one that doesn't. Query analyser didn't get this right either, but SSMS does.)
--
PRINT '1' -- /* ;PRINT '2' */ ;PRINT '3' /*
PRINT '4' --*/
--/*
PRINT '5'
--*/
/*
PRINT '6'
--/* point here is that 7 is still commented, because T-SQL supports nested multiline comments.
*/
PRINT '7'
--*/
PRINT '8'
// ArduinoJson - arduinojson.org
// Copyright Benoit Blanchon 2014-2020
// MIT License
#include <ArduinoJson.h>
#include <catch.hpp>
static void check(const JsonArray array, const char* expected_data,
size_t expected_len) {
std::string expected(expected_data, expected_data + expected_len);
std::string actual;
size_t len = serializeMsgPack(array, actual);
CAPTURE(array);
REQUIRE(len == expected_len);
REQUIRE(actual == expected);
}
template <size_t N>
static void check(const JsonArray array, const char (&expected_data)[N]) {
const size_t expected_len = N - 1;
check(array, expected_data, expected_len);
}
static void check(const JsonArray array, const std::string& expected) {
check(array, expected.data(), expected.length());
}
TEST_CASE("serialize MsgPack array") {
DynamicJsonDocument doc(JSON_ARRAY_SIZE(65536));
JsonArray array = doc.to<JsonArray>();
SECTION("empty") {
check(array, "\x90");
}
SECTION("fixarray") {
array.add("hello");
array.add("world");
check(array, "\x92\xA5hello\xA5world");
}
SECTION("array 16") {
for (int i = 0; i < 16; i++) array.add(i);
check(array,
"\xDC\x00\x10\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D"
"\x0E\x0F");
}
SECTION("array 32") {
const char* nil = 0;
for (int i = 0; i < 65536; i++) array.add(nil);
check(array,
std::string("\xDD\x00\x01\x00\x00", 5) + std::string(65536, '\xc0'));
}
}
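The three array headers exercised by these sections follow directly from the MessagePack format: fixarray for fewer than 16 elements (`0x90 | n`), "array 16" (`0xDC` plus a big-endian uint16 count), and "array 32" (`0xDD` plus a big-endian uint32 count). A minimal Python sketch of just the header encoding:

```python
import struct


def msgpack_array_header(n: int) -> bytes:
    """Return the MessagePack prefix for an array of n elements."""
    if n < 16:
        return bytes([0x90 | n])          # fixarray
    if n < 0x10000:
        return b"\xdc" + struct.pack(">H", n)  # array 16
    return b"\xdd" + struct.pack(">I", n)      # array 32
```

This reproduces the prefixes the test expects: `\x90` for empty, `\x92` for two elements, `\xDC\x00\x10` for 16, and `\xDD\x00\x01\x00\x00` for 65536.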
.so ../bk-macros
.\" help://hardlink
.TH "bk relink" "\*[BKVER]" %E% "\*(BC" "\*(UM"
.SH NAME
bk relink \- recreate broken hard links
.SH SYNOPSIS
.B bk relink
.[B] \-q
.ARG from
.[ARG] from2\ .\|.\|.
.ARG to
.br
.B bk relink
.[B] \-q
.SH DESCRIPTION
The relink command is used to conserve disk space. It is typical for
a single user to have many repositories, each one representing a different
work in progress. It is also typical to use the
.Q \-l
option to
.B bk clone
to create hard-linked repositories.
A hard-linked repository uses much less space than a copied repository.
As files are modified, the links are broken.
As the same set of changes comes into a set of repositories, the links
can be restored.
That is what the relink command does.
.LP
The relink command looks at each \*(BK file in the
.ARG from
repository and if it is the same as the same file in the
.ARG to
repository, it replaces the file in the
.ARG from
repository
with a hard link to the file in the
.ARG to
repository.
.LP
If no repositories are specified, then
.ARG from
defaults to the current repository and
.ARG to
defaults to all parent[s] of the current repository.
.SH OPTIONS
.TP
.B \-q
Run quietly.
.SH WARNINGS
While hard-linked repositories are less disk intensive than replicated
repositories, they are also more vulnerable to disk or file system
corruption. It is advisable to always have at least one recent copy
of a repository, rather than 100% hard-linked repositories.
.LP
It is possible to break all the links by recomputing the per file
checksums:
.DS
bk repocheck
bk -A admin -z
.DE
.SH NOTE
This command works only on filesystems which support hard links,
and only if both repositories are in the same file system.
.LP
On recent (2014) versions of Ubuntu (and other Linux distributions),
the use of hardlinks has been curtailed for security reasons.
See
http://man7.org/linux/man-pages/man5/proc.5.html
and search for
.BR protected_hardlinks .
The relink command will fail in this case.
.SH "SEE ALSO"
.SA clone
.SH CATEGORY
.B Repository
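The core of what this page describes — replace a file with a hard link to its twin when both have identical content and live on the same filesystem — can be sketched in Python. The `relink` helper below is illustrative, not BitKeeper's implementation:

```python
import filecmp
import os


def relink(frm: str, to: str) -> bool:
    """Replace `frm` with a hard link to `to` when safe to do so."""
    sf, st = os.stat(frm), os.stat(to)
    if sf.st_dev == st.st_dev and sf.st_ino == st.st_ino:
        return False  # already the same file; nothing to do
    if sf.st_dev != st.st_dev:
        return False  # hard links require a single filesystem
    if not filecmp.cmp(frm, to, shallow=False):
        return False  # contents differ, so the link must stay broken
    os.unlink(frm)
    os.link(to, frm)  # frm now shares to's inode
    return True
```

As the NOTE section says, this only works within one filesystem, and on systems with `protected_hardlinks` enabled the `os.link` call can fail.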
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>JSDoc: Module: axis/projection</title>
<script src="scripts/prettify/prettify.js"> </script>
<script src="scripts/prettify/lang-css.js"> </script>
<!--[if lt IE 9]>
<script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
<link type="text/css" rel="stylesheet" href="styles/prettify-tomorrow.css">
<link type="text/css" rel="stylesheet" href="styles/jsdoc-default.css">
</head>
<body>
<div id="main">
<h1 class="page-title">Module: axis/projection</h1>
<section>
<header>
</header>
<article>
<div class="container-overview">
<div class="description">The axis projection manager module.</div>
<dl class="details">
<dt class="tag-source">Source:</dt>
<dd class="tag-source"><ul class="dummy"><li>
<a href="projection.js.html">projection.js</a>, <a href="projection.js.html#line28">line 28</a>
</li></ul></dd>
</dl>
</div>
<h3 class="subsection-title">Classes</h3>
<dl>
<dt><a href="module-axis_projection-Projections.html">Projections</a></dt>
<dd></dd>
</dl>
</article>
</section>
</div>
<nav>
<h2><a href="index.html">Home</a></h2><h3>Modules</h3><ul><li><a href="module-axis.html">axis</a></li><li><a href="module-axis_constants.html">axis/constants</a></li><li><a href="module-axis_controls_controller.html">axis/controls/controller</a></li><li><a href="module-axis_controls_keyboard.html">axis/controls/keyboard</a></li><li><a href="module-axis_controls_movement.html">axis/controls/movement</a></li><li><a href="module-axis_controls_orientation.html">axis/controls/orientation</a></li><li><a href="module-axis_controls_pointer.html">axis/controls/pointer</a></li><li><a href="module-axis_controls_touch.html">axis/controls/touch</a></li><li><a href="module-axis_projection.html">axis/projection</a></li><li><a href="module-axis_projection_flat.html">axis/projection/flat</a></li><li><a href="module-axis_state.html">axis/state</a></li><li><a href="module-scope_projection_equilinear.html">scope/projection/equilinear</a></li><li><a href="module-scope_projection_fisheye.html">scope/projection/fisheye</a></li><li><a href="module-scope_projection_tinyplanet.html">scope/projection/tinyplanet</a></li></ul><h3>Classes</h3><ul><li><a href="module-axis_controls_controller.html">axis/controls/controller</a></li><li><a href="module-axis_controls_keyboard.KeyboardController.html">KeyboardController</a></li><li><a href="module-axis_controls_movement.MovementController.html">MovementController</a></li><li><a href="module-axis_controls_orientation.OrientationController.html">OrientationController</a></li><li><a href="module-axis_controls_pointer.PointerController.html">PointerController</a></li><li><a href="module-axis_controls_touch.TouchController.html">TouchController</a></li><li><a href="module-axis_projection-Projections.html">Projections</a></li><li><a href="module-axis_state-State.html">State</a></li><li><a href="module-axis-Axis.html">Axis</a></li></ul><h3>Events</h3><ul><li><a href="module-axis_state-State.html#event:ready">ready</a></li><li><a 
href="module-axis_state-State.html#event:update">update</a></li><li><a href="module-axis-Axis.html#event:click">click</a></li><li><a href="module-axis-Axis.html#event:fullscreenchange">fullscreenchange</a></li><li><a href="module-axis-Axis.html#event:keydown">keydown</a></li><li><a href="module-axis-Axis.html#event:ready">ready</a></li><li><a href="module-axis-Axis.html#event:vrhmdavailable">vrhmdavailable</a></li></ul><h3>Global</h3><ul><li><a href="global.html#createCamera">createCamera</a></li><li><a href="global.html#three">three</a></li></ul>
</nav>
<br class="clear">
<footer>
Documentation generated by <a href="https://github.com/jsdoc3/jsdoc">JSDoc 3.3.2</a> on Fri Aug 07 2015 16:47:54 GMT-0400 (EDT)
</footer>
<script> prettyPrint(); </script>
<script src="scripts/linenumber.js"> </script>
</body>
</html>
/******************************************************************************
*
* Module Name: exsystem - Interface to OS services
*
*****************************************************************************/
/******************************************************************************
*
* 1. Copyright Notice
*
* Some or all of this work - Copyright (c) 1999 - 2011, Intel Corp.
* All rights reserved.
*
* 2. License
*
* 2.1. This is your license from Intel Corp. under its intellectual property
* rights. You may have additional license terms from the party that provided
* you this software, covering your right to use that party's intellectual
* property rights.
*
* 2.2. Intel grants, free of charge, to any person ("Licensee") obtaining a
* copy of the source code appearing in this file ("Covered Code") an
* irrevocable, perpetual, worldwide license under Intel's copyrights in the
* base code distributed originally by Intel ("Original Intel Code") to copy,
* make derivatives, distribute, use and display any portion of the Covered
* Code in any form, with the right to sublicense such rights; and
*
* 2.3. Intel grants Licensee a non-exclusive and non-transferable patent
* license (with the right to sublicense), under only those claims of Intel
* patents that are infringed by the Original Intel Code, to make, use, sell,
* offer to sell, and import the Covered Code and derivative works thereof
* solely to the minimum extent necessary to exercise the above copyright
* license, and in no event shall the patent license extend to any additions
* to or modifications of the Original Intel Code. No other license or right
* is granted directly or by implication, estoppel or otherwise;
*
* The above copyright and patent license is granted only if the following
* conditions are met:
*
* 3. Conditions
*
* 3.1. Redistribution of Source with Rights to Further Distribute Source.
* Redistribution of source code of any substantial portion of the Covered
* Code or modification with rights to further distribute source must include
* the above Copyright Notice, the above License, this list of Conditions,
* and the following Disclaimer and Export Compliance provision. In addition,
* Licensee must cause all Covered Code to which Licensee contributes to
* contain a file documenting the changes Licensee made to create that Covered
* Code and the date of any change. Licensee must include in that file the
* documentation of any changes made by any predecessor Licensee. Licensee
* must include a prominent statement that the modification is derived,
* directly or indirectly, from Original Intel Code.
*
* 3.2. Redistribution of Source with no Rights to Further Distribute Source.
* Redistribution of source code of any substantial portion of the Covered
* Code or modification without rights to further distribute source must
* include the following Disclaimer and Export Compliance provision in the
* documentation and/or other materials provided with distribution. In
* addition, Licensee may not authorize further sublicense of source of any
* portion of the Covered Code, and must include terms to the effect that the
* license from Licensee to its licensee is limited to the intellectual
* property embodied in the software Licensee provides to its licensee, and
* not to intellectual property embodied in modifications its licensee may
* make.
*
* 3.3. Redistribution of Executable. Redistribution in executable form of any
* substantial portion of the Covered Code or modification must reproduce the
* above Copyright Notice, and the following Disclaimer and Export Compliance
* provision in the documentation and/or other materials provided with the
* distribution.
*
* 3.4. Intel retains all right, title, and interest in and to the Original
* Intel Code.
*
* 3.5. Neither the name Intel nor any other trademark owned or controlled by
* Intel shall be used in advertising or otherwise to promote the sale, use or
* other dealings in products derived from or relating to the Covered Code
* without prior written authorization from Intel.
*
* 4. Disclaimer and Export Compliance
*
* 4.1. INTEL MAKES NO WARRANTY OF ANY KIND REGARDING ANY SOFTWARE PROVIDED
* HERE. ANY SOFTWARE ORIGINATING FROM INTEL OR DERIVED FROM INTEL SOFTWARE
* IS PROVIDED "AS IS," AND INTEL WILL NOT PROVIDE ANY SUPPORT, ASSISTANCE,
* INSTALLATION, TRAINING OR OTHER SERVICES. INTEL WILL NOT PROVIDE ANY
* UPDATES, ENHANCEMENTS OR EXTENSIONS. INTEL SPECIFICALLY DISCLAIMS ANY
* IMPLIED WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT AND FITNESS FOR A
* PARTICULAR PURPOSE.
*
* 4.2. IN NO EVENT SHALL INTEL HAVE ANY LIABILITY TO LICENSEE, ITS LICENSEES
* OR ANY OTHER THIRD PARTY, FOR ANY LOST PROFITS, LOST DATA, LOSS OF USE OR
* COSTS OF PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY INDIRECT,
* SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THIS AGREEMENT, UNDER ANY
* CAUSE OF ACTION OR THEORY OF LIABILITY, AND IRRESPECTIVE OF WHETHER INTEL
* HAS ADVANCE NOTICE OF THE POSSIBILITY OF SUCH DAMAGES. THESE LIMITATIONS
* SHALL APPLY NOTWITHSTANDING THE FAILURE OF THE ESSENTIAL PURPOSE OF ANY
* LIMITED REMEDY.
*
* 4.3. Licensee shall not export, either directly or indirectly, any of this
* software or system incorporating such software without first obtaining any
* required license or other approval from the U. S. Department of Commerce or
* any other agency or department of the United States Government. In the
* event Licensee exports any such software from the United States or
* re-exports any such software from a foreign destination, Licensee shall
* ensure that the distribution and export/re-export of the software is in
* compliance with all laws, regulations, orders, or other restrictions of the
* U.S. Export Administration Regulations. Licensee agrees that neither it nor
* any of its subsidiaries will export/re-export any technical data, process,
* software, or service, directly or indirectly, to any country for which the
* United States government or any agency thereof requires an export license,
* other governmental approval, or letter of assurance, without first obtaining
* such license, approval or letter.
*
*****************************************************************************/
#define __EXSYSTEM_C__
#include "acpi/acpi.h"
#include "acpi/accommon.h"
#include "acpi/acinterp.h"
#define _COMPONENT ACPI_EXECUTER
ACPI_MODULE_NAME ("exsystem")
/*******************************************************************************
*
* FUNCTION: AcpiExSystemWaitSemaphore
*
* PARAMETERS: Semaphore - Semaphore to wait on
* Timeout - Max time to wait
*
* RETURN: Status
*
* DESCRIPTION: Implements a semaphore wait with a check to see if the
* semaphore is available immediately. If it is not, the
* interpreter is released before waiting.
*
******************************************************************************/
ACPI_STATUS
AcpiExSystemWaitSemaphore (
ACPI_SEMAPHORE Semaphore,
UINT16 Timeout)
{
ACPI_STATUS Status;
ACPI_FUNCTION_TRACE (ExSystemWaitSemaphore);
Status = AcpiOsWaitSemaphore (Semaphore, 1, ACPI_DO_NOT_WAIT);
if (ACPI_SUCCESS (Status))
{
return_ACPI_STATUS (Status);
}
if (Status == AE_TIME)
{
/* We must wait, so unlock the interpreter */
AcpiExRelinquishInterpreter ();
Status = AcpiOsWaitSemaphore (Semaphore, 1, Timeout);
ACPI_DEBUG_PRINT ((ACPI_DB_EXEC,
"*** Thread awake after blocking, %s\n",
AcpiFormatException (Status)));
/* Reacquire the interpreter */
AcpiExReacquireInterpreter ();
}
return_ACPI_STATUS (Status);
}
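The pattern in AcpiExSystemWaitSemaphore — poll with ACPI_DO_NOT_WAIT first, and only relinquish the interpreter when a real blocking wait is unavoidable — can be sketched with Python threading primitives standing in for the ACPI OS layer. Names here are illustrative stand-ins, not real ACPICA APIs:

```python
import threading

# Stand-in for the global interpreter lock that ACPICA releases
# around blocking waits.
interpreter_lock = threading.Lock()


def wait_semaphore(sem: threading.Semaphore, timeout: float) -> bool:
    """Mirror AcpiExSystemWaitSemaphore's shape: try a non-blocking
    acquire first; only drop the interpreter lock if we must block."""
    if sem.acquire(blocking=False):   # the ACPI_DO_NOT_WAIT fast path
        return True
    interpreter_lock.release()        # AcpiExRelinquishInterpreter()
    try:
        return sem.acquire(timeout=timeout)
    finally:
        interpreter_lock.acquire()    # AcpiExReacquireInterpreter()
```

The fast path never touches the interpreter lock, which is the point: releasing and reacquiring it is only worth the cost when the thread is actually going to sleep.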
/*******************************************************************************
*
* FUNCTION: AcpiExSystemWaitMutex
*
* PARAMETERS: Mutex - Mutex to wait on
* Timeout - Max time to wait
*
* RETURN: Status
*
* DESCRIPTION: Implements a mutex wait with a check to see if the
* mutex is available immediately. If it is not, the
* interpreter is released before waiting.
*
******************************************************************************/
ACPI_STATUS
AcpiExSystemWaitMutex (
ACPI_MUTEX Mutex,
UINT16 Timeout)
{
ACPI_STATUS Status;
ACPI_FUNCTION_TRACE (ExSystemWaitMutex);
Status = AcpiOsAcquireMutex (Mutex, ACPI_DO_NOT_WAIT);
if (ACPI_SUCCESS (Status))
{
return_ACPI_STATUS (Status);
}
if (Status == AE_TIME)
{
/* We must wait, so unlock the interpreter */
AcpiExRelinquishInterpreter ();
Status = AcpiOsAcquireMutex (Mutex, Timeout);
ACPI_DEBUG_PRINT ((ACPI_DB_EXEC,
"*** Thread awake after blocking, %s\n",
AcpiFormatException (Status)));
/* Reacquire the interpreter */
AcpiExReacquireInterpreter ();
}
return_ACPI_STATUS (Status);
}
/*******************************************************************************
*
* FUNCTION: AcpiExSystemDoStall
*
* PARAMETERS: HowLong - The amount of time to stall,
* in microseconds
*
* RETURN: Status
*
* DESCRIPTION: Suspend running thread for specified amount of time.
* Note: ACPI specification requires that Stall() does not
* relinquish the processor, and delays longer than 100 usec
* should use Sleep() instead. We allow stalls up to 255 usec
* for compatibility with other interpreters and existing BIOSs.
*
******************************************************************************/
ACPI_STATUS
AcpiExSystemDoStall (
UINT32 HowLong)
{
ACPI_STATUS Status = AE_OK;
ACPI_FUNCTION_ENTRY ();
if (HowLong > 255) /* 255 microseconds */
{
/*
* Longer than 255 usec, this is an error
*
* (ACPI specifies 100 usec as max, but this gives some slack in
* order to support existing BIOSs)
*/
ACPI_ERROR ((AE_INFO, "Time parameter is too large (%u)",
HowLong));
Status = AE_AML_OPERAND_VALUE;
}
else
{
AcpiOsStall (HowLong);
}
return (Status);
}
/*******************************************************************************
*
* FUNCTION: AcpiExSystemDoSleep
*
* PARAMETERS: HowLong - The amount of time to sleep,
* in milliseconds
*
* RETURN: None
*
* DESCRIPTION: Sleep the running thread for specified amount of time.
*
******************************************************************************/
ACPI_STATUS
AcpiExSystemDoSleep (
UINT64 HowLong)
{
ACPI_FUNCTION_ENTRY ();
/* Since this thread will sleep, we must release the interpreter */
AcpiExRelinquishInterpreter ();
/*
* For compatibility with other ACPI implementations and to prevent
* accidental deep sleeps, limit the sleep time to something reasonable.
*/
if (HowLong > ACPI_MAX_SLEEP)
{
HowLong = ACPI_MAX_SLEEP;
}
AcpiOsSleep (HowLong);
/* And now we must get the interpreter again */
AcpiExReacquireInterpreter ();
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: AcpiExSystemSignalEvent
*
* PARAMETERS: ObjDesc - The object descriptor for this op
*
* RETURN: Status
*
* DESCRIPTION: Provides an access point to perform synchronization operations
* within the AML.
*
******************************************************************************/
ACPI_STATUS
AcpiExSystemSignalEvent (
ACPI_OPERAND_OBJECT *ObjDesc)
{
ACPI_STATUS Status = AE_OK;
ACPI_FUNCTION_TRACE (ExSystemSignalEvent);
if (ObjDesc)
{
Status = AcpiOsSignalSemaphore (ObjDesc->Event.OsSemaphore, 1);
}
return_ACPI_STATUS (Status);
}
/*******************************************************************************
*
* FUNCTION: AcpiExSystemWaitEvent
*
* PARAMETERS: TimeDesc - The 'time to delay' object descriptor
* ObjDesc - The object descriptor for this op
*
* RETURN: Status
*
* DESCRIPTION: Provides an access point to perform synchronization operations
* within the AML. This operation is a request to wait for an
* event.
*
******************************************************************************/
ACPI_STATUS
AcpiExSystemWaitEvent (
ACPI_OPERAND_OBJECT *TimeDesc,
ACPI_OPERAND_OBJECT *ObjDesc)
{
ACPI_STATUS Status = AE_OK;
ACPI_FUNCTION_TRACE (ExSystemWaitEvent);
if (ObjDesc)
{
Status = AcpiExSystemWaitSemaphore (ObjDesc->Event.OsSemaphore,
(UINT16) TimeDesc->Integer.Value);
}
return_ACPI_STATUS (Status);
}
/*******************************************************************************
*
* FUNCTION: AcpiExSystemResetEvent
*
* PARAMETERS: ObjDesc - The object descriptor for this op
*
* RETURN: Status
*
* DESCRIPTION: Reset an event to a known state.
*
******************************************************************************/
ACPI_STATUS
AcpiExSystemResetEvent (
ACPI_OPERAND_OBJECT *ObjDesc)
{
ACPI_STATUS Status = AE_OK;
ACPI_SEMAPHORE TempSemaphore;
ACPI_FUNCTION_ENTRY ();
/*
* We are going to simply delete the existing semaphore and
* create a new one!
*/
Status = AcpiOsCreateSemaphore (ACPI_NO_UNIT_LIMIT, 0, &TempSemaphore);
if (ACPI_SUCCESS (Status))
{
(void) AcpiOsDeleteSemaphore (ObjDesc->Event.OsSemaphore);
ObjDesc->Event.OsSemaphore = TempSemaphore;
}
return (Status);
}
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: google/protobuf/unittest_lazy_dependencies_enum.proto
#ifndef PROTOBUF_google_2fprotobuf_2funittest_5flazy_5fdependencies_5fenum_2eproto__INCLUDED
#define PROTOBUF_google_2fprotobuf_2funittest_5flazy_5fdependencies_5fenum_2eproto__INCLUDED
#include <string>
#include <google/protobuf/stubs/common.h>
#if GOOGLE_PROTOBUF_VERSION < 3005000
#error This file was generated by a newer version of protoc which is
#error incompatible with your Protocol Buffer headers. Please update
#error your headers.
#endif
#if 3005001 < GOOGLE_PROTOBUF_MIN_PROTOC_VERSION
#error This file was generated by an older version of protoc which is
#error incompatible with your Protocol Buffer headers. Please
#error regenerate this file with a newer version of protoc.
#endif
#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/arena.h>
#include <google/protobuf/arenastring.h>
#include <google/protobuf/generated_message_table_driven.h>
#include <google/protobuf/generated_message_util.h>
#include <google/protobuf/metadata.h>
#include <google/protobuf/repeated_field.h> // IWYU pragma: export
#include <google/protobuf/extension_set.h> // IWYU pragma: export
#include <google/protobuf/generated_enum_reflection.h>
// @@protoc_insertion_point(includes)
namespace protobuf_google_2fprotobuf_2funittest_5flazy_5fdependencies_5fenum_2eproto {
// Internal implementation detail -- do not use these members.
struct TableStruct {
static const ::google::protobuf::internal::ParseTableField entries[];
static const ::google::protobuf::internal::AuxillaryParseTableField aux[];
static const ::google::protobuf::internal::ParseTable schema[1];
static const ::google::protobuf::internal::FieldMetadata field_metadata[];
static const ::google::protobuf::internal::SerializationTable serialization_table[];
static const ::google::protobuf::uint32 offsets[];
};
void AddDescriptors();
inline void InitDefaults() {
}
} // namespace protobuf_google_2fprotobuf_2funittest_5flazy_5fdependencies_5fenum_2eproto
namespace protobuf_unittest {
namespace lazy_imports {
} // namespace lazy_imports
} // namespace protobuf_unittest
namespace protobuf_unittest {
namespace lazy_imports {
enum LazyEnum {
LAZY_ENUM_0 = 0,
LAZY_ENUM_1 = 1
};
bool LazyEnum_IsValid(int value);
const LazyEnum LazyEnum_MIN = LAZY_ENUM_0;
const LazyEnum LazyEnum_MAX = LAZY_ENUM_1;
const int LazyEnum_ARRAYSIZE = LazyEnum_MAX + 1;
const ::google::protobuf::EnumDescriptor* LazyEnum_descriptor();
inline const ::std::string& LazyEnum_Name(LazyEnum value) {
return ::google::protobuf::internal::NameOfEnum(
LazyEnum_descriptor(), value);
}
inline bool LazyEnum_Parse(
const ::std::string& name, LazyEnum* value) {
return ::google::protobuf::internal::ParseNamedEnum<LazyEnum>(
LazyEnum_descriptor(), name, value);
}
// ===================================================================
// ===================================================================
// ===================================================================
#ifdef __GNUC__
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wstrict-aliasing"
#endif // __GNUC__
#ifdef __GNUC__
#pragma GCC diagnostic pop
#endif // __GNUC__
// @@protoc_insertion_point(namespace_scope)
} // namespace lazy_imports
} // namespace protobuf_unittest
namespace google {
namespace protobuf {
template <> struct is_proto_enum< ::protobuf_unittest::lazy_imports::LazyEnum> : ::google::protobuf::internal::true_type {};
template <>
inline const EnumDescriptor* GetEnumDescriptor< ::protobuf_unittest::lazy_imports::LazyEnum>() {
return ::protobuf_unittest::lazy_imports::LazyEnum_descriptor();
}
} // namespace protobuf
} // namespace google
// @@protoc_insertion_point(global_scope)
#endif // PROTOBUF_google_2fprotobuf_2funittest_5flazy_5fdependencies_5fenum_2eproto__INCLUDED
@import '../../../../common';
.audits {
line-height: 1rem;
}
.table tbody tr td {
padding-bottom: 0px;
padding-top: 0px;
}
.ui.table thead th.pl50 {
padding-left: 50px;
}
// Copyright (c) Microsoft Corporation. All rights reserved.
using System;
using System.Collections.Generic;
namespace ColorSyntax.Parsing
{
public interface ILanguageParser
{
void Parse(string sourceCode,
ILanguage language,
Action<string, IList<Scope>> parseHandler);
}
}
/*
* Copyright 2017 Google LLC
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following disclaimer
* in the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Google LLC nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package com.google.api.gax.grpc;
import com.google.api.gax.longrunning.OperationSnapshot;
import com.google.api.gax.rpc.StatusCode;
import com.google.longrunning.Operation;
import io.grpc.Status;
/**
* Implementation of OperationSnapshot based on gRPC.
*
* <p>Package-private for internal usage.
*/
class GrpcOperationSnapshot implements OperationSnapshot {
private final Operation operation;
public GrpcOperationSnapshot(Operation operation) {
this.operation = operation;
}
@Override
public String getName() {
return operation.getName();
}
@Override
public Object getMetadata() {
return operation.getMetadata();
}
@Override
public boolean isDone() {
return operation.getDone();
}
@Override
public Object getResponse() {
return operation.getResponse();
}
@Override
public StatusCode getErrorCode() {
return GrpcStatusCode.of(Status.fromCodeValue(operation.getError().getCode()).getCode());
}
@Override
public String getErrorMessage() {
return operation.getError().getMessage();
}
public static GrpcOperationSnapshot create(Operation operation) {
return new GrpcOperationSnapshot(operation);
}
}
<?php
/*
* This file is part of SwiftMailer.
* (c) 2004-2009 Chris Corbyn
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
/**
* Provides an abstract way of specifying recipients for batch sending.
*
* @author Chris Corbyn
*/
interface Swift_Mailer_RecipientIterator
{
/**
* Returns true only if there are more recipients to send to.
*
* @return bool
*/
public function hasNext();
/**
* Returns an array where the keys are the addresses of recipients and the
* values are the names. e.g. ('foo@bar' => 'Foo') or ('foo@bar' => NULL).
*
* @return array
*/
public function nextRecipient();
}
<?php
namespace App;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\SoftDeletes;
use Hrshadhin\Userstamps\UserstampsTrait;
use App\Http\Helpers\AppHelper;
use Illuminate\Support\Arr;
class Student extends Model
{
use SoftDeletes;
use UserstampsTrait;
/**
* The attributes that are mass assignable.
*
* @var array
*/
protected $fillable = [
'user_id',
'name',
'nick_name',
'dob',
'gender',
'religion',
'blood_group',
'nationality',
'photo',
'email',
'phone_no',
'extra_activity',
'note',
'father_name',
'father_phone_no',
'mother_name',
'mother_phone_no',
'guardian',
'guardian_phone_no',
'present_address',
'permanent_address',
'sms_receive_no',
'siblings',
'status',
];
public function registration()
{
return $this->hasMany('App\Registration', 'student_id');
}
public function getGenderAttribute($value)
{
return Arr::get(AppHelper::GENDER, $value);
}
public function getReligionAttribute($value)
{
return Arr::get(AppHelper::RELIGION, $value);
}
public function getBloodGroupAttribute($value)
{
if($value) {
return Arr::get(AppHelper::BLOOD_GROUP, $value);
}
return "";
}
}
## Summary
In this workshop, you have worked with OpenShift and learned about how Helm 3 integration helps developers to deploy and run applications.
As a follow-up to this workshop, we recommend the [Operator SDK with Helm](https://learn.openshift.com/operatorframework/helm-operator/) workshop, also available as a self-paced lab.
We hope you have found this workshop helpful in learning about Helm on OpenShift, and would love any feedback you have on ways to make it better! Feel free to open issues in this workshop’s [GitHub repository](https://github.com/openshift-labs/learn-katacoda).
To learn more about Helm, the resources below can provide information on everything from getting started to more advanced concepts.
Helm Documentation: https://helm.sh
Getting Started with Helm 3 on OpenShift: https://docs.openshift.com/container-platform/4.4/cli_reference/helm_cli/getting-started-with-helm-on-openshift-container-platform.html
Read more in the Developers blog:
https://developers.redhat.com/blog/2020/04/30/application-deployment-improvements-in-openshift-4-4/
Try it out on your workstation! Download [CodeReady Containers](https://developers.redhat.com/products/codeready-containers/overview) to get a local instance of OpenShift Container Platform 4 where you can run Helm charts: https://www.openshift.com/try
StartChar: nine
Encoding: 57 57 82
Width: 466
VWidth: 0
Flags: W
HStem: -186 18<41.9316 75.6465> 89 53<168.428 255.987> 378 36<172.378 277.51>
VStem: 53 69<192.647 300.858> 346 80<102.36 310.427>
LayerCount: 2
Fore
SplineSet
251 414 m 0
348 414 426 328 426 210 c 0
426 96 354 6 300 -47 c 0
245 -101 124 -176 55 -186 c 0
49 -187 41 -187 41 -181 c 24
41 -174 48 -170 55 -168 c 0
112 -150 228 -56 270 0 c 0
306 48 346 114 346 227 c 0
346 310 291 378 218 378 c 0
153 378 122 316 122 267 c 0
122 206 167 142 220 142 c 0
244 142 255 145 268 151 c 0
280 156 290 163 300 168 c 0
305 170 309 161 307 157 c 0
300 144 285 130 274 122 c 0
259 111 219 89 182 89 c 0
113 89 53 143 53 215 c 0
53 306 130 414 251 414 c 0
EndSplineSet
Validated: 1
Substitution2: "'tnum' Tabular Numbers lookup 7 subtable" nine.taboldstyle
Substitution2: "'lnum' Lining Figures lookup 5 subtable" nine.lnum
EndChar
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fake
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
labels "k8s.io/apimachinery/pkg/labels"
schema "k8s.io/apimachinery/pkg/runtime/schema"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
v1 "k8s.io/client-go/pkg/api/v1"
testing "k8s.io/client-go/testing"
)
// FakeNodes implements NodeInterface
type FakeNodes struct {
Fake *FakeCoreV1
}
var nodesResource = schema.GroupVersionResource{Group: "", Version: "v1", Resource: "nodes"}
func (c *FakeNodes) Create(node *v1.Node) (result *v1.Node, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootCreateAction(nodesResource, node), &v1.Node{})
if obj == nil {
return nil, err
}
return obj.(*v1.Node), err
}
func (c *FakeNodes) Update(node *v1.Node) (result *v1.Node, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootUpdateAction(nodesResource, node), &v1.Node{})
if obj == nil {
return nil, err
}
return obj.(*v1.Node), err
}
func (c *FakeNodes) UpdateStatus(node *v1.Node) (*v1.Node, error) {
obj, err := c.Fake.
Invokes(testing.NewRootUpdateSubresourceAction(nodesResource, "status", node), &v1.Node{})
if obj == nil {
return nil, err
}
return obj.(*v1.Node), err
}
func (c *FakeNodes) Delete(name string, options *meta_v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewRootDeleteAction(nodesResource, name), &v1.Node{})
return err
}
func (c *FakeNodes) DeleteCollection(options *meta_v1.DeleteOptions, listOptions meta_v1.ListOptions) error {
action := testing.NewRootDeleteCollectionAction(nodesResource, listOptions)
_, err := c.Fake.Invokes(action, &v1.NodeList{})
return err
}
func (c *FakeNodes) Get(name string, options meta_v1.GetOptions) (result *v1.Node, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootGetAction(nodesResource, name), &v1.Node{})
if obj == nil {
return nil, err
}
return obj.(*v1.Node), err
}
func (c *FakeNodes) List(opts meta_v1.ListOptions) (result *v1.NodeList, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootListAction(nodesResource, opts), &v1.NodeList{})
if obj == nil {
return nil, err
}
label, _, _ := testing.ExtractFromListOptions(opts)
if label == nil {
label = labels.Everything()
}
list := &v1.NodeList{}
for _, item := range obj.(*v1.NodeList).Items {
if label.Matches(labels.Set(item.Labels)) {
list.Items = append(list.Items, item)
}
}
return list, err
}
// Watch returns a watch.Interface that watches the requested nodes.
func (c *FakeNodes) Watch(opts meta_v1.ListOptions) (watch.Interface, error) {
return c.Fake.
InvokesWatch(testing.NewRootWatchAction(nodesResource, opts))
}
// Patch applies the patch and returns the patched node.
func (c *FakeNodes) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Node, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootPatchSubresourceAction(nodesResource, name, data, subresources...), &v1.Node{})
if obj == nil {
return nil, err
}
return obj.(*v1.Node), err
}
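The `List` implementation above runs every tracked object through a label selector before returning it. A minimal, stdlib-only sketch of that filtering step (plain Go types standing in for the real client-go/apimachinery API, so the names here are illustrative, not the actual interfaces):

```go
package main

import "fmt"

// object is a stand-in for a stored API object with labels.
type object struct {
	Name   string
	Labels map[string]string
}

// matches reports whether labels carries every key=value pair in selector;
// an empty selector matches everything, like labels.Everything().
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// list mimics FakeNodes.List: keep only the items whose labels match.
func list(store []object, selector map[string]string) []object {
	var out []object
	for _, o := range store {
		if matches(selector, o.Labels) {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	nodes := []object{
		{Name: "a", Labels: map[string]string{"zone": "eu"}},
		{Name: "b", Labels: map[string]string{"zone": "us"}},
	}
	fmt.Println(len(list(nodes, map[string]string{"zone": "eu"})))
}
```

In the real fake client the store is the shared object tracker, and `ExtractFromListOptions` turns the string selector in `ListOptions` into a `labels.Selector` before this loop runs.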
/*
* CODE GENERATED AUTOMATICALLY WITH github.com/stretchr/testify/_codegen
* THIS FILE MUST NOT BE EDITED BY HAND
*/
package assert
import (
http "net/http"
url "net/url"
time "time"
)
// Conditionf uses a Comparison to assert a complex condition.
func Conditionf(t TestingT, comp Comparison, msg string, args ...interface{}) bool {
return Condition(t, comp, append([]interface{}{msg}, args...)...)
}
// Containsf asserts that the specified string, list(array, slice...) or map contains the
// specified substring or element.
//
// assert.Containsf(t, "Hello World", "World", "error message %s", "formatted")
// assert.Containsf(t, ["Hello", "World"], "World", "error message %s", "formatted")
// assert.Containsf(t, {"Hello": "World"}, "Hello", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Containsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool {
return Contains(t, s, contains, append([]interface{}{msg}, args...)...)
}
// Emptyf asserts that the specified object is empty. I.e. nil, "", false, 0 or either
// a slice or a channel with len == 0.
//
// assert.Emptyf(t, obj, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Emptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
return Empty(t, object, append([]interface{}{msg}, args...)...)
}
// Equalf asserts that two objects are equal.
//
// assert.Equalf(t, 123, 123, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
//
// Pointer variable equality is determined based on the equality of the
// referenced values (as opposed to the memory addresses). Function equality
// cannot be determined and will always fail.
func Equalf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
return Equal(t, expected, actual, append([]interface{}{msg}, args...)...)
}
// EqualErrorf asserts that a function returned an error (i.e. not `nil`)
// and that it is equal to the provided error.
//
// actualObj, err := SomeFunction()
// assert.EqualErrorf(t, err, expectedErrorString, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func EqualErrorf(t TestingT, theError error, errString string, msg string, args ...interface{}) bool {
return EqualError(t, theError, errString, append([]interface{}{msg}, args...)...)
}
// EqualValuesf asserts that two objects are equal or convertible to the same types
// and equal.
//
// assert.EqualValuesf(t, uint32(123), int32(123), "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func EqualValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
return EqualValues(t, expected, actual, append([]interface{}{msg}, args...)...)
}
// Errorf asserts that a function returned an error (i.e. not `nil`).
//
// actualObj, err := SomeFunction()
// if assert.Errorf(t, err, "error message %s", "formatted") {
// assert.Equal(t, expectedErrorf, err)
// }
//
// Returns whether the assertion was successful (true) or not (false).
func Errorf(t TestingT, err error, msg string, args ...interface{}) bool {
return Error(t, err, append([]interface{}{msg}, args...)...)
}
// Exactlyf asserts that two objects are equal in value and type.
//
// assert.Exactlyf(t, int32(123), int64(123), "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Exactlyf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
return Exactly(t, expected, actual, append([]interface{}{msg}, args...)...)
}
// Failf reports a failure.
func Failf(t TestingT, failureMessage string, msg string, args ...interface{}) bool {
return Fail(t, failureMessage, append([]interface{}{msg}, args...)...)
}
// FailNowf fails the test.
func FailNowf(t TestingT, failureMessage string, msg string, args ...interface{}) bool {
return FailNow(t, failureMessage, append([]interface{}{msg}, args...)...)
}
// Falsef asserts that the specified value is false.
//
// assert.Falsef(t, myBool, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Falsef(t TestingT, value bool, msg string, args ...interface{}) bool {
return False(t, value, append([]interface{}{msg}, args...)...)
}
// HTTPBodyContainsf asserts that a specified handler returns a
// body that contains a string.
//
// assert.HTTPBodyContainsf(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPBodyContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}) bool {
return HTTPBodyContains(t, handler, method, url, values, str)
}
// HTTPBodyNotContainsf asserts that a specified handler returns a
// body that does not contain a string.
//
// assert.HTTPBodyNotContainsf(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPBodyNotContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}) bool {
return HTTPBodyNotContains(t, handler, method, url, values, str)
}
// HTTPErrorf asserts that a specified handler returns an error status code.
//
// assert.HTTPErrorf(t, myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}})
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPErrorf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values) bool {
return HTTPError(t, handler, method, url, values)
}
// HTTPRedirectf asserts that a specified handler returns a redirect status code.
//
// assert.HTTPRedirectf(t, myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}})
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPRedirectf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values) bool {
return HTTPRedirect(t, handler, method, url, values)
}
// HTTPSuccessf asserts that a specified handler returns a success status code.
//
// assert.HTTPSuccessf(t, myHandler, "POST", "http://www.google.com", nil, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPSuccessf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values) bool {
return HTTPSuccess(t, handler, method, url, values)
}
// Implementsf asserts that the specified object implements the given interface.
//
// assert.Implementsf(t, (*MyInterface)(nil), new(MyObject), "error message %s", "formatted")
func Implementsf(t TestingT, interfaceObject interface{}, object interface{}, msg string, args ...interface{}) bool {
return Implements(t, interfaceObject, object, append([]interface{}{msg}, args...)...)
}
// InDeltaf asserts that the two numerals are within delta of each other.
//
// assert.InDeltaf(t, math.Pi, (22 / 7.0), 0.01, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func InDeltaf(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool {
return InDelta(t, expected, actual, delta, append([]interface{}{msg}, args...)...)
}
// InDeltaSlicef is the same as InDelta, except it compares two slices.
func InDeltaSlicef(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool {
return InDeltaSlice(t, expected, actual, delta, append([]interface{}{msg}, args...)...)
}
// InEpsilonf asserts that expected and actual have a relative error less than epsilon
//
// Returns whether the assertion was successful (true) or not (false).
func InEpsilonf(t TestingT, expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool {
return InEpsilon(t, expected, actual, epsilon, append([]interface{}{msg}, args...)...)
}
// InEpsilonSlicef is the same as InEpsilon, except it compares each value from two slices.
func InEpsilonSlicef(t TestingT, expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool {
return InEpsilonSlice(t, expected, actual, epsilon, append([]interface{}{msg}, args...)...)
}
// IsTypef asserts that the specified objects are of the same type.
func IsTypef(t TestingT, expectedType interface{}, object interface{}, msg string, args ...interface{}) bool {
return IsType(t, expectedType, object, append([]interface{}{msg}, args...)...)
}
// JSONEqf asserts that two JSON strings are equivalent.
//
// assert.JSONEqf(t, `{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func JSONEqf(t TestingT, expected string, actual string, msg string, args ...interface{}) bool {
return JSONEq(t, expected, actual, append([]interface{}{msg}, args...)...)
}
// Lenf asserts that the specified object has a specific length.
// Lenf also fails if the object has a type that len() does not accept.
//
// assert.Lenf(t, mySlice, 3, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Lenf(t TestingT, object interface{}, length int, msg string, args ...interface{}) bool {
return Len(t, object, length, append([]interface{}{msg}, args...)...)
}
// Nilf asserts that the specified object is nil.
//
// assert.Nilf(t, err, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Nilf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
return Nil(t, object, append([]interface{}{msg}, args...)...)
}
// NoErrorf asserts that a function returned no error (i.e. `nil`).
//
// actualObj, err := SomeFunction()
// if assert.NoErrorf(t, err, "error message %s", "formatted") {
// assert.Equal(t, expectedObj, actualObj)
// }
//
// Returns whether the assertion was successful (true) or not (false).
func NoErrorf(t TestingT, err error, msg string, args ...interface{}) bool {
return NoError(t, err, append([]interface{}{msg}, args...)...)
}
// NotContainsf asserts that the specified string, list(array, slice...) or map does NOT contain the
// specified substring or element.
//
// assert.NotContainsf(t, "Hello World", "Earth", "error message %s", "formatted")
// assert.NotContainsf(t, ["Hello", "World"], "Earth", "error message %s", "formatted")
// assert.NotContainsf(t, {"Hello": "World"}, "Earth", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func NotContainsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool {
return NotContains(t, s, contains, append([]interface{}{msg}, args...)...)
}
// NotEmptyf asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either
// a slice or a channel with len == 0.
//
// if assert.NotEmptyf(t, obj, "error message %s", "formatted") {
// assert.Equal(t, "two", obj[1])
// }
//
// Returns whether the assertion was successful (true) or not (false).
func NotEmptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
return NotEmpty(t, object, append([]interface{}{msg}, args...)...)
}
// NotEqualf asserts that the specified values are NOT equal.
//
// assert.NotEqualf(t, obj1, obj2, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
//
// Pointer variable equality is determined based on the equality of the
// referenced values (as opposed to the memory addresses).
func NotEqualf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
return NotEqual(t, expected, actual, append([]interface{}{msg}, args...)...)
}
// NotNilf asserts that the specified object is not nil.
//
// assert.NotNilf(t, err, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func NotNilf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
return NotNil(t, object, append([]interface{}{msg}, args...)...)
}
// NotPanicsf asserts that the code inside the specified PanicTestFunc does NOT panic.
//
// assert.NotPanicsf(t, func(){ RemainCalm() }, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func NotPanicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool {
return NotPanics(t, f, append([]interface{}{msg}, args...)...)
}
// NotRegexpf asserts that a specified regexp does not match a string.
//
// assert.NotRegexpf(t, regexp.MustCompile("starts"), "it's starting", "error message %s", "formatted")
// assert.NotRegexpf(t, "^start", "it's not starting", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func NotRegexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool {
return NotRegexp(t, rx, str, append([]interface{}{msg}, args...)...)
}
// NotSubsetf asserts that the specified list(array, slice...) contains not all
// elements given in the specified subset(array, slice...).
//
// assert.NotSubsetf(t, [1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func NotSubsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool {
return NotSubset(t, list, subset, append([]interface{}{msg}, args...)...)
}
// NotZerof asserts that i is not the zero value for its type and returns the truth.
func NotZerof(t TestingT, i interface{}, msg string, args ...interface{}) bool {
return NotZero(t, i, append([]interface{}{msg}, args...)...)
}
// Panicsf asserts that the code inside the specified PanicTestFunc panics.
//
// assert.Panicsf(t, func(){ GoCrazy() }, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Panicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool {
return Panics(t, f, append([]interface{}{msg}, args...)...)
}
// PanicsWithValuef asserts that the code inside the specified PanicTestFunc panics, and that
// the recovered panic value equals the expected panic value.
//
// assert.PanicsWithValuef(t, "crazy error", func(){ GoCrazy() }, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func PanicsWithValuef(t TestingT, expected interface{}, f PanicTestFunc, msg string, args ...interface{}) bool {
return PanicsWithValue(t, expected, f, append([]interface{}{msg}, args...)...)
}
// Regexpf asserts that a specified regexp matches a string.
//
// assert.Regexpf(t, regexp.MustCompile("start"), "it's starting", "error message %s", "formatted")
// assert.Regexpf(t, "start...$", "it's not starting", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Regexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool {
return Regexp(t, rx, str, append([]interface{}{msg}, args...)...)
}
// Subsetf asserts that the specified list(array, slice...) contains all
// elements given in the specified subset(array, slice...).
//
// assert.Subsetf(t, [1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Subsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool {
return Subset(t, list, subset, append([]interface{}{msg}, args...)...)
}
// Truef asserts that the specified value is true.
//
// assert.Truef(t, myBool, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func Truef(t TestingT, value bool, msg string, args ...interface{}) bool {
return True(t, value, append([]interface{}{msg}, args...)...)
}
// WithinDurationf asserts that the two times are within duration delta of each other.
//
// assert.WithinDurationf(t, time.Now(), time.Now(), 10*time.Second, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func WithinDurationf(t TestingT, expected time.Time, actual time.Time, delta time.Duration, msg string, args ...interface{}) bool {
return WithinDuration(t, expected, actual, delta, append([]interface{}{msg}, args...)...)
}
// Zerof asserts that i is the zero value for its type and returns the truth.
func Zerof(t TestingT, i interface{}, msg string, args ...interface{}) bool {
return Zero(t, i, append([]interface{}{msg}, args...)...)
}
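Every `...f` wrapper above follows the same forwarding pattern: the format string is prepended to the variadic args and the whole slice is passed to the non-formatted assertion, which renders the message only on failure. A stdlib-only sketch of that pattern (the helper names here are illustrative, not testify's internals):

```go
package main

import "fmt"

// render mimics how a non-formatted assertion turns its trailing
// msgAndArgs slice into a message: a leading string is treated as a
// format string for the remaining args.
func render(msgAndArgs ...interface{}) string {
	if len(msgAndArgs) == 0 {
		return ""
	}
	if format, ok := msgAndArgs[0].(string); ok {
		return fmt.Sprintf(format, msgAndArgs[1:]...)
	}
	return fmt.Sprint(msgAndArgs...)
}

// renderf is the generated-style wrapper: it folds msg into the
// variadic slice exactly like the ...f functions above do with
// append([]interface{}{msg}, args...).
func renderf(msg string, args ...interface{}) string {
	return render(append([]interface{}{msg}, args...)...)
}

func main() {
	fmt.Println(renderf("error message %s", "formatted")) // error message formatted
}
```

This is why each wrapper is a one-liner: all formatting logic lives in the base assertion, and the code generator only has to thread the arguments through.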
# frozen_string_literal: true
# ElasticSearch relative tasks
namespace :fablab do
namespace :es do
desc '(re)Build ElasticSearch fablab base for stats'
task build_stats: :environment do
delete_stats_index
create_stats_index
create_stats_mappings
add_event_filters
end
def delete_stats_index
puts 'DELETE stats'
`curl -XDELETE http://#{ENV['ELASTICSEARCH_HOST']}:9200/stats`
end
def create_stats_index
puts 'PUT index stats'
`curl -XPUT http://#{ENV['ELASTICSEARCH_HOST']}:9200/stats -d'
{
"settings" : {
"index" : {
"number_of_replicas" : 0
}
}
}
'`
end
def create_stats_mappings
%w[account event machine project subscription training user space].each do |stat|
puts "PUT Mapping stats/#{stat}"
`curl -XPUT http://#{ENV['ELASTICSEARCH_HOST']}:9200/stats/#{stat}/_mapping -d '
{
"properties": {
"type": {
"type": "string",
"index" : "not_analyzed"
},
"subType": {
"type": "string",
"index" : "not_analyzed"
},
"date": {
"type": "date"
},
"name": {
"type": "string",
"index" : "not_analyzed"
}
}
}';`
end
end
desc 'add event filters to statistics'
task add_event_filters: :environment do
add_event_filters
end
def add_event_filters
`curl -XPUT http://#{ENV['ELASTICSEARCH_HOST']}:9200/stats/event/_mapping -d '
{
"properties": {
"ageRange": {
"type": "string",
"index" : "not_analyzed"
},
"eventTheme": {
"type": "string",
"index" : "not_analyzed"
}
}
}';`
end
desc 'add spaces reservations to statistics'
task add_spaces: :environment do
`curl -XPUT http://#{ENV['ELASTICSEARCH_HOST']}:9200/stats/space/_mapping -d '
{
"properties": {
"type": {
"type": "string",
"index" : "not_analyzed"
},
"subType": {
"type": "string",
"index" : "not_analyzed"
},
"date": {
"type": "date"
},
"name": {
"type": "string",
"index" : "not_analyzed"
}
}
}';`
end
desc 'sync all/one availabilities in ElasticSearch index'
task :build_availabilities_index, [:id] => :environment do |_task, args|
client = Availability.__elasticsearch__.client
# create index if not exists
Availability.__elasticsearch__.create_index! force: true unless client.indices.exists? index: Availability.index_name
# delete doctype if exists
if client.indices.exists_type? index: Availability.index_name, type: Availability.document_type
client.indices.delete_mapping index: Availability.index_name, type: Availability.document_type
end
# create doctype
client.indices.put_mapping index: Availability.index_name,
type: Availability.document_type,
body: Availability.mappings.to_hash
# verify doctype creation was successful
if client.indices.exists_type? index: Availability.index_name, type: Availability.document_type
puts "[ElasticSearch] #{Availability.index_name}/#{Availability.document_type} successfully created with its mapping."
# index requested documents
if args.id
AvailabilityIndexerWorker.perform_async(:index, args.id)
else
Availability.pluck(:id).each do |availability_id|
AvailabilityIndexerWorker.perform_async(:index, availability_id)
end
end
else
puts "[ElasticSearch] An error occurred while creating #{Availability.index_name}/#{Availability.document_type}. " \
'Please check your ElasticSearch configuration.'
puts "\nCancelling..."
end
end
desc '(re)generate statistics in ElasticSearch for the past period. Use 0 to generate for today'
task :generate_stats, [:period] => :environment do |_task, args|
raise 'FATAL ERROR: You must pass a number of days (=> past period) OR a date to generate statistics' unless args.period
unless Setting.get('statistics_module')
print 'Statistics are disabled. Do you still want to generate? (y/N) '
confirm = STDIN.gets.chomp
raise 'Interrupted by user' unless confirm == 'y'
end
worker = PeriodStatisticsWorker.new
worker.perform(args.period)
end
end
end
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard2.0</TargetFramework>
</PropertyGroup>
</Project>
{if $full_page}
{include file="pageheader.htm"}
{insert_scripts files="../js/utils.js,listtable.js"}
<div class="list-div" id="listDiv">
{/if}
<table cellspacing='1' cellpadding='3' id='list-table'>
<tr>
<th>{$lang.role_name}</th>
<th>{$lang.role_describe}</th>
<th>{$lang.handler}</th>
</tr>
{foreach from=$admin_list item=list}
<tr>
<td class="first-cell" >{$list.role_name}</td>
<td class="first-cell" >{$list.role_describe}</td>
<td align="center">
<a href="role.php?act=edit&id={$list.role_id}" title="{$lang.edit}"><img src="images/icon_edit.gif" border="0" height="16" width="16"></a>
<a href="javascript:;" onclick="listTable.remove({$list.role_id}, '{$lang.drop_confirm}')" title="{$lang.remove}"><img src="images/icon_drop.gif" border="0" height="16" width="16"></a></td>
</tr>
{/foreach}
</table>
{if $full_page}
</div>
<script type="text/javascript" language="JavaScript">
{literal}
onload = function()
{
// Start checking orders
startCheckOrder();
}
{/literal}
</script>
{include file="pagefooter.htm"}
{/if}
<!--
Copyright 2018 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!DOCTYPE html>
<html lang="en">
<head>
<meta name="viewport" content="width=device-width, minimum-scale=1.0, initial-scale=1.0, user-scalable=yes">
<meta charset="utf-8">
<title>Promises Lab</title>
<link rel="stylesheet" href="styles/main.css">
</head>
<body>
<header>
<h1>Promises Lab</h1>
</header>
<main>
<label for="country">Country Name:</label>
<input id="country" type="text" placeholder="enter country name"><br><br>
<button id="get-image-name">Get Image Name</button><br><br>
<button id="fetch-flag-image">Fetch Flag Image</button><br><br>
<div class="img-container" id="img-container">
<!-- image added dynamically -->
</div>
</main>
<footer>
<a href="https://github.com/google-developer-training/pwa-training-labs">GitHub</a>
</footer>
<script src="js/main.js"></script>
<script>
const country = document.getElementById('country');
const getImageNameButton = document.getElementById('get-image-name');
getImageNameButton.addEventListener('click', function() {
app.getImageName(country.value);
});
const flagChainButton = document.getElementById('fetch-flag-image');
flagChainButton.addEventListener('click', function() {
app.flagChain(country.value);
});
</script>
</body>
</html>
import { NodePath } from "@babel/traverse";
import * as t from "@babel/types";
import { last } from "../array-helpers";
export {
isClassPropertyIdentifier,
isVariableDeclarationIdentifier,
isFunctionCallIdentifier,
isJSXPartialElement,
isPartOfMemberExpression,
isArrayExpressionElement,
areAllObjectProperties,
isUndefinedLiteral,
isGuardClause,
isGuardConsequentBlock,
isNonEmptyReturn,
hasFinalReturn,
isTruthy,
isFalsy,
areSameAssignments,
areEquivalent,
isTemplateExpression,
isInBranchedLogic,
isInAlternate,
areOpposite,
areOppositeOperators,
getOppositeOperator,
canBeShorthand
};
function isClassPropertyIdentifier(path: NodePath): boolean {
return (
t.isClassProperty(path.parent) &&
!path.parent.computed &&
t.isIdentifier(path)
);
}
function isVariableDeclarationIdentifier(path: NodePath): boolean {
return t.isVariableDeclarator(path.parent) && t.isIdentifier(path);
}
function isFunctionCallIdentifier(path: NodePath): boolean {
return t.isCallExpression(path.parent) && path.parent.callee === path.node;
}
function isJSXPartialElement(path: NodePath): boolean {
return t.isJSXOpeningElement(path) || t.isJSXClosingElement(path);
}
function isPartOfMemberExpression(path: NodePath): boolean {
return t.isMemberExpression(path.parent) && t.isIdentifier(path);
}
function isArrayExpressionElement(
node: t.Node | null
): node is null | t.Expression | t.SpreadElement {
return node === null || t.isExpression(node) || t.isSpreadElement(node);
}
function areAllObjectProperties(
nodes: (t.Node | null)[]
): nodes is t.ObjectProperty[] {
return nodes.every((node) => t.isObjectProperty(node));
}
function isUndefinedLiteral(
node: object | null | undefined,
opts?: object | null
): node is t.Identifier {
return t.isIdentifier(node, opts) && node.name === "undefined";
}
function isGuardClause(path: NodePath<t.IfStatement>) {
const { consequent, alternate } = path.node;
if (alternate) return false;
return t.isReturnStatement(consequent) || isGuardConsequentBlock(consequent);
}
function isTruthy(test: t.Expression): boolean {
return areEquivalent(test, t.booleanLiteral(true));
}
function isFalsy(test: t.Expression): boolean {
return areEquivalent(test, t.booleanLiteral(false));
}
function isGuardConsequentBlock(
consequent: t.IfStatement["consequent"]
): consequent is t.BlockStatement {
return t.isBlockStatement(consequent) && hasFinalReturn(consequent.body);
}
function isNonEmptyReturn(node: t.Node) {
return t.isReturnStatement(node) && node.argument !== null;
}
function hasFinalReturn(statements: t.Statement[]): boolean {
return t.isReturnStatement(last(statements));
}
function areSameAssignments(
expressionA: t.AssignmentExpression,
expressionB: t.AssignmentExpression
): boolean {
return (
areEquivalent(expressionA.left, expressionB.left) &&
expressionA.operator === expressionB.operator
);
}
function areEquivalent(nodeA: t.Node | null, nodeB: t.Node | null): boolean {
if (nodeA === null) return false;
if (nodeB === null) return false;
if (t.isNullLiteral(nodeA) && t.isNullLiteral(nodeB)) return true;
if (isUndefinedLiteral(nodeA) && isUndefinedLiteral(nodeB)) return true;
if (t.isThisExpression(nodeA) && t.isThisExpression(nodeB)) return true;
// Arrays
if (t.isArrayExpression(nodeA) && t.isArrayExpression(nodeB)) {
return areAllEqual(nodeA.elements, nodeB.elements);
}
// Objects
if (t.isObjectExpression(nodeA) && t.isObjectExpression(nodeB)) {
return areAllEqual(nodeA.properties, nodeB.properties);
}
if (t.isObjectProperty(nodeA) && t.isObjectProperty(nodeB)) {
return (
areEquivalent(nodeA.key, nodeB.key) &&
areEquivalent(nodeA.value, nodeB.value)
);
}
// Identifiers
if (t.isIdentifier(nodeA) && t.isIdentifier(nodeB)) {
return nodeA.name === nodeB.name;
}
// Functions
if (
t.isArrowFunctionExpression(nodeA) &&
t.isArrowFunctionExpression(nodeB)
) {
return areEquivalent(nodeA.body, nodeB.body);
}
// Call Expressions
if (t.isCallExpression(nodeA) && t.isCallExpression(nodeB)) {
return (
areEquivalent(nodeA.callee, nodeB.callee) &&
areAllEqual(nodeA.arguments, nodeB.arguments)
);
}
// Binary & Logical Expressions
if (
(t.isLogicalExpression(nodeA) && t.isLogicalExpression(nodeB)) ||
(t.isBinaryExpression(nodeA) && t.isBinaryExpression(nodeB))
) {
return (
nodeA.operator === nodeB.operator &&
areEquivalent(nodeA.left, nodeB.left) &&
areEquivalent(nodeA.right, nodeB.right)
);
}
// Unary Expressions
if (t.isUnaryExpression(nodeA) && t.isUnaryExpression(nodeB)) {
return (
nodeA.operator === nodeB.operator &&
areEquivalent(nodeA.argument, nodeB.argument)
);
}
// Member Expressions
if (t.isMemberExpression(nodeA) && t.isMemberExpression(nodeB)) {
return (
areEquivalent(nodeA.property, nodeB.property) &&
areEquivalent(nodeA.object, nodeB.object)
);
}
// New Expressions
if (t.isNewExpression(nodeA) && t.isNewExpression(nodeB)) {
return (
areEquivalent(nodeA.callee, nodeB.callee) &&
areAllEqual(nodeA.arguments, nodeB.arguments)
);
}
// JSX Elements
if (t.isJSXElement(nodeA) && t.isJSXElement(nodeB)) {
const areClosingElementsEqual =
(nodeA.closingElement === null && nodeB.closingElement === null) ||
areEquivalent(nodeA.closingElement, nodeB.closingElement);
return (
areEquivalent(nodeA.openingElement, nodeB.openingElement) &&
areClosingElementsEqual &&
areAllEqual(nodeA.children, nodeB.children)
);
}
if (t.isJSXOpeningElement(nodeA) && t.isJSXOpeningElement(nodeB)) {
return (
areEquivalent(nodeA.name, nodeB.name) &&
areAllEqual(nodeA.attributes, nodeB.attributes)
);
}
if (t.isJSXClosingElement(nodeA) && t.isJSXClosingElement(nodeB)) {
return areEquivalent(nodeA.name, nodeB.name);
}
if (t.isJSXAttribute(nodeA) && t.isJSXAttribute(nodeB)) {
return (
areEquivalent(nodeA.name, nodeB.name) &&
areEquivalent(nodeA.value, nodeB.value)
);
}
if (t.isJSXIdentifier(nodeA) && t.isJSXIdentifier(nodeB)) {
return nodeA.name === nodeB.name;
}
// TS types
if (t.isTSTypeAnnotation(nodeA) && t.isTSTypeAnnotation(nodeB)) {
return nodeA.typeAnnotation.type === nodeB.typeAnnotation.type;
}
// Primitive values
return "value" in nodeA && "value" in nodeB && nodeA.value === nodeB.value;
}
function areAllEqual(
nodesA: (t.Node | null)[],
nodesB: (t.Node | null)[]
): boolean {
return (
nodesA.length === nodesB.length &&
nodesA.every((node, i) => areEquivalent(node, nodesB[i]))
);
}
function isTemplateExpression(node: t.Node): node is TemplateExpression {
return (
t.isIdentifier(node) ||
t.isCallExpression(node) ||
t.isMemberExpression(node)
);
}
type TemplateExpression = t.Identifier | t.CallExpression | t.MemberExpression;
function isInBranchedLogic(path: NodePath<t.ReturnStatement>) {
return path.getAncestry().some((path) => t.isIfStatement(path));
}
function isInAlternate(path: NodePath<t.IfStatement>): boolean {
const { parentPath } = path;
return t.isBlockStatement(parentPath)
? t.isIfStatement(parentPath.parent) &&
parentPath.parent.alternate === path.parent
: t.isIfStatement(parentPath.node) &&
parentPath.node.alternate === path.node;
}
function areOpposite(testA: t.Expression, testB: t.Expression): boolean {
if (!t.isBinaryExpression(testA)) return false;
if (!t.isBinaryExpression(testB)) return false;
const EQUALS_OPERATORS = ["==", "==="];
if (
EQUALS_OPERATORS.includes(testA.operator) &&
EQUALS_OPERATORS.includes(testB.operator)
) {
return (
areEquivalent(testA.left, testB.left) &&
!areEquivalent(testA.right, testB.right)
);
}
if (areOppositeOperators(testA.operator, testB.operator)) {
return (
areEquivalent(testA.left, testB.left) &&
areEquivalent(testA.right, testB.right)
);
}
return false;
}
const OPPOSITE_OPERATORS: t.BinaryExpression["operator"][][] = [
["===", "!=="],
["==", "!="],
[">", "<="],
[">", "<"],
[">=", "<"]
];
function areOppositeOperators(
operatorA: t.BinaryExpression["operator"],
operatorB: t.BinaryExpression["operator"]
): boolean {
return OPPOSITE_OPERATORS.some(
([left, right]) =>
(operatorA === left && operatorB === right) ||
(operatorA === right && operatorB === left)
);
}
function getOppositeOperator(
operator: t.BinaryExpression["operator"]
): t.BinaryExpression["operator"] {
let result: t.BinaryExpression["operator"] | undefined;
OPPOSITE_OPERATORS.forEach(([left, right]) => {
if (operator === left) result = right;
if (operator === right) result = left;
});
return result || operator;
}
function canBeShorthand(path: NodePath): path is NodePath<t.ObjectProperty> {
return (
t.isObjectProperty(path.node) &&
!path.node.computed &&
t.isIdentifier(path.node.key)
);
}
/*
* Copyright 2013 Twitter Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License. You may obtain
* a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.twitter.storehaus.cache
import org.scalatest.{Matchers, WordSpec}
class HHFilteredCacheTest extends WordSpec with Matchers {
def checkCache[K, V](pairs: Seq[(K, V)], m: Map[K, V])(
implicit cache: MutableCache[K, V]): Unit = {
pairs.foldLeft(cache)(_ += _)
val res = cache.iterator.toMap
cache.clear
res should equal(m)
}
"HHFilteredCache works properly" in {
val backingCache =
MutableCache.fromJMap[String, Int](new java.util.LinkedHashMap[String, Int])
implicit val cache = new HHFilteredCache[String, Int](
backingCache, HeavyHittersPercent(0.5f), WriteOperationUpdateFrequency(1),
RollOverFrequencyMS(10000000L))
checkCache(
Seq("a" -> 1, "b" -> 2),
Map("a" -> 1, "b" -> 2)
)
// Ensures the previous clear operation took effect:
// the output == the input
checkCache(
Seq("c" -> 1, "d" -> 2),
Map("c" -> 1, "d" -> 2)
)
// Nothing above the 0.5 HH threshold
checkCache(
Seq("a" -> 1, "b" -> 2, "c" -> 3, "d" -> 3, "e" -> 3, "a" -> 1),
Map()
)
// Only b should be above the HH threshold
checkCache(
Seq("a" -> 1, "b" -> 2, "b" -> 3, "c" -> 1, "b" -> 3, "a" -> 1),
Map("b" -> 3)
)
}
}
<?php
/**
* This is the template for generating the model class of a specified table.
* - $this: the ModelCode object
* - $tableName: the table name for this class (prefix is already removed if necessary)
* - $modelClass: the model class name
* - $columns: list of table columns (name=>CDbColumnSchema)
* - $labels: list of attribute labels (name=>label)
* - $rules: list of validation rules
* - $relations: list of relations (name=>relation declaration)
*/
?>
<?php echo "<?php\n"; ?>
/**
* This is the model class for table "<?php echo $tableName; ?>".
*
* The followings are the available columns in table '<?php echo $tableName; ?>':
<?php foreach($columns as $column): ?>
* @property <?php echo $column->type.' $'.$column->name."\n"; ?>
<?php endforeach; ?>
<?php if(!empty($relations)): ?>
*
* The followings are the available model relations:
<?php foreach($relations as $name=>$relation): ?>
* @property <?php
if (preg_match("~^array\(self::([^,]+), '([^']+)', '([^']+)'\)$~", $relation, $matches))
{
$relationType = $matches[1];
$relationModel = $matches[2];
switch($relationType){
case 'HAS_ONE':
echo $relationModel.' $'.$name."\n";
break;
case 'BELONGS_TO':
echo $relationModel.' $'.$name."\n";
break;
case 'HAS_MANY':
echo $relationModel.'[] $'.$name."\n";
break;
case 'MANY_MANY':
echo $relationModel.'[] $'.$name."\n";
break;
default:
echo 'mixed $'.$name."\n";
}
}
?>
<?php endforeach; ?>
<?php endif; ?>
*/
class <?php echo $modelClass; ?> extends <?php echo $this->baseClass."\n"; ?>
{
/**
* Returns the static model of the specified AR class.
* @param string $className active record class name.
* @return <?php echo $modelClass; ?> the static model class
*/
public static function model($className=__CLASS__)
{
return parent::model($className);
}
<?php if($connectionId!='db'):?>
/**
* @return CDbConnection database connection
*/
public function getDbConnection()
{
return Yii::app()-><?php echo $connectionId ?>;
}
<?php endif?>
/**
* @return string the associated database table name
*/
public function tableName()
{
return '<?php echo $tableName; ?>';
}
/**
* @return array validation rules for model attributes.
*/
public function rules()
{
// NOTE: you should only define rules for those attributes that
// will receive user inputs.
return array(
<?php foreach($rules as $rule): ?>
<?php echo $rule.",\n"; ?>
<?php endforeach; ?>
// The following rule is used by search().
// Please remove those attributes that should not be searched.
array('<?php echo implode(', ', array_keys($columns)); ?>', 'safe', 'on'=>'search'),
);
}
/**
* @return array relational rules.
*/
public function relations()
{
// NOTE: you may need to adjust the relation name and the related
// class name for the relations automatically generated below.
return array(
<?php foreach($relations as $name=>$relation): ?>
<?php echo "'$name' => $relation,\n"; ?>
<?php endforeach; ?>
);
}
/**
* @return array customized attribute labels (name=>label)
*/
public function attributeLabels()
{
return array(
<?php foreach($labels as $name=>$label): ?>
<?php echo "'$name' => '$label',\n"; ?>
<?php endforeach; ?>
);
}
/**
* Retrieves a list of models based on the current search/filter conditions.
* @return CActiveDataProvider the data provider that can return the models based on the search/filter conditions.
*/
public function search()
{
// Warning: Please modify the following code to remove attributes that
// should not be searched.
$criteria=new CDbCriteria;
<?php
foreach($columns as $name=>$column)
{
if($column->type==='string')
{
echo "\t\t\$criteria->compare('$name',\$this->$name,true);\n";
}
else
{
echo "\t\t\$criteria->compare('$name',\$this->$name);\n";
}
}
?>
return new CActiveDataProvider($this, array(
'criteria'=>$criteria,
));
}
}
# TODO: fully separate this from the rest of the project
# currently, it depends on Coda directly for configuration, and log providers depend on this directly
defmodule Cloud.Google do
@moduledoc "Google Cloud interface."
alias GoogleApi.Logging.V2, as: Logging
alias GoogleApi.PubSub.V1, as: PubSub
@type pubsub_conn :: PubSub.Connection.t()
@type logging_conn :: Logging.Connection.t()
defmodule ApiError do
defexception [:message, :error]
def message(%__MODULE__{message: message, error: error}) do
"#{message}: #{inspect(error)}"
end
end
defmodule Connections do
@moduledoc "Collection of connections for communicating with the Google Cloud API"
use Class
defclass(
pubsub: Cloud.Google.pubsub_conn(),
logging: Cloud.Google.logging_conn()
)
end
@spec connect :: Connections.t()
def connect do
{:ok, token} = Goth.Token.for_scope("https://www.googleapis.com/auth/cloud-platform")
%Connections{
pubsub: PubSub.Connection.new(token.token),
logging: Logging.Connection.new(token.token)
}
end
end
/*
* Copyright (C) Lightbend Inc. <https://www.lightbend.com>
*/
package play.db;
/**
* A base for Java connection pool components.
*
* @see ConnectionPool
*/
public interface ConnectionPoolComponents {
ConnectionPool connectionPool();
}
@media (max-width: 2000px)
  .header
    height: 50px
    .page-content
      &:after
        max-width: 35%
    h1
      position: static
      display: inline-block
      margin-top: 0px
    .nav
      margin-top: 11px
  .page-content
    margin: 75px auto
    max-width: 1600px
  .col-1
    width: 23.9%
    padding: 10px 5px 0 0
  .slogan
    padding-left: 5px
    padding-right: 5px
@media (max-width: 1440px)
  .header
    .page-content
      &:after
        max-width: 35%
  .page-content
    margin: 70px auto
    max-width: 1500px
@media (max-width: 1200px)
  .header
    .page-content
      &:after
        max-width: 30%
  .page-content
    margin: 70px auto
    max-width: 1150px
@media (max-width: 940px)
  .header
    height: 70px
    .page-content
      padding: 0 30px
      &:after
        border: none
    h1
      position: static
      display: inline-block
      margin-top: 19px
      font-size: 1.6em
    .nav
      font-size: 1.0em
      margin-top: 25px
  .about
    display: block
    margin: 0 auto
  .img-grid
    padding-left: 5px
  .col-1
    width: 49%
    padding: 10px 0 0 5px
  .col-2
    width: 100%
    padding: 10px 10px 0 5px
  .col-3
    padding: 10px 10px 0 5px
  .page-content
    margin: 95px auto
@media (max-width: 620px)
  .header
    height: 80px
    margin-bottom: 30px
    h1
      display: block
      margin-top: 10px
      font-size: 1.5em
      text-align: center
    .nav
      display: block
      text-align: center
      float: none
      margin-top: 10px
@media (max-width: 500px)
  .header
    height: 85px
    margin-bottom: 40px
    h1
      display: block
      margin: 0
      padding-top: 10px
      text-align: center
    .nav
      display: block
      text-align: center
      float: none
      margin-top: 10px
  .page-content
    margin: 130px auto
@media (max-width: 460px)
  .img-grid
    margin-top: 15px
  .col-1
    width: 100%
    padding: 10px 10px 0 5px
ActiveRecord::Schema.define :version => 0 do
create_table :seeded_models, :force => true do |t|
t.column :login, :string
t.column :first_name, :string
t.column :last_name, :string
t.column :title, :string
end
end
{
"images" : [
{
"idiom" : "universal",
"scale" : "1x"
},
{
"idiom" : "universal",
"scale" : "2x"
},
{
"idiom" : "universal",
"filename" : "liveState@3x.png",
"scale" : "3x"
}
],
"info" : {
"version" : 1,
"author" : "xcode"
}
}
// Copyright 2017 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !prod
// When included in the main build process, this file PERMANENTLY caches all
// HTTP requests. This is useful for quickly prototyping customisations to the
// bar without incurring HTTP costs or consuming quota on remote services.
// The cache is stored in ~/.cache/barista/http (using XDG_CACHE_HOME if set),
// and individual responses can be deleted if a fresher copy is needed.
// Once you are satisfied with the bar, simply omit this file (or build with
// the "prod" tag) to build a production version of the bar with no caching.
package main
import "net/http"
import "barista.run/testing/httpcache"
func init() {
http.DefaultTransport = httpcache.Wrap(http.DefaultTransport)
}
# JavaScript for Impatient Programmers



> Original: [JavaScript for impatient programmers](http://exploringjs.com/impatient-js/)
>
> Version: Beta
>
> License: [CC BY-NC-SA 4.0](http://creativecommons.org/licenses/by-nc-sa/4.0/)
>
> Everyone is welcome to join in and improve this project: one person can walk fast, but a group of people can walk farther.

* [Read online](https://impatient-js.apachecn.org)
* [ApacheCN interview and job-hunting discussion group 724187166](https://jq.qq.com/?_wv=1027&k=54ujcL3)
* [ApacheCN learning resources](http://www.apachecn.org/)

## Contribution Guide

The project is currently in the proofreading stage. Please read the [contribution guide](CONTRIBUTING.md) and claim a task in the [overall progress](https://github.com/apachecn/impatient-js-zh/issues/1) issue.

> Please be bold in translating and improving translations. Although we strive for excellence, we do not require perfection, so do not worry about making mistakes in a translation; in most cases, our server keeps a record of all translations, so you need not fear that a slip will cause irreparable damage. (Adapted from Wikipedia)

## Contact

### Maintainer

* [飞龙](https://github.com/wizardforcel): 562826179

### Other

* Claim translations and track project progress: <https://github.com/apachecn/impatient-js-zh/issues/1>
* File an issue on our [apachecn/impatient-js-zh](https://github.com/apachecn/impatient-js-zh) GitHub repo.
* Send an email to: `apachecn@163.com`.
* Contact the group owner/admin in our [organization discussion group](http://www.apachecn.org/organization/348.html).

## Sponsor Us


package arping
import (
"net"
"syscall"
"time"
)
var sock int
var toSockaddr syscall.SockaddrLinklayer
func initialize(iface net.Interface) error {
toSockaddr = syscall.SockaddrLinklayer{Ifindex: iface.Index}
// 1544 = htons(ETH_P_ARP)
const proto = 1544
var err error
sock, err = syscall.Socket(syscall.AF_PACKET, syscall.SOCK_RAW, proto)
return err
}
func send(request arpDatagram) (time.Time, error) {
return time.Now(), syscall.Sendto(sock, request.MarshalWithEthernetHeader(), 0, &toSockaddr)
}
func receive() (arpDatagram, time.Time, error) {
buffer := make([]byte, 128)
n, _, err := syscall.Recvfrom(sock, buffer, 0)
if err != nil {
return arpDatagram{}, time.Now(), err
}
// skip 14 bytes ethernet header
return parseArpDatagram(buffer[14:n]), time.Now(), nil
}
func deinitialize() error {
return syscall.Close(sock)
}
// Copyright 2014 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// This file has been auto-generated by code_generator_v8.py. DO NOT MODIFY!
#ifndef V8FileWriterCallback_h
#define V8FileWriterCallback_h
#include "bindings/core/v8/ActiveDOMCallback.h"
#include "bindings/core/v8/DOMWrapperWorld.h"
#include "bindings/core/v8/ScopedPersistent.h"
#include "modules/ModulesExport.h"
#include "modules/filesystem/FileWriterCallback.h"
namespace blink {
class V8FileWriterCallback final : public FileWriterCallback, public ActiveDOMCallback {
WILL_BE_USING_GARBAGE_COLLECTED_MIXIN(V8FileWriterCallback);
public:
static V8FileWriterCallback* create(v8::Local<v8::Function> callback, ScriptState* scriptState)
{
return new V8FileWriterCallback(callback, scriptState);
}
~V8FileWriterCallback() override;
DECLARE_VIRTUAL_TRACE();
void handleEvent(FileWriter* fileWriter) override;
private:
MODULES_EXPORT V8FileWriterCallback(v8::Local<v8::Function>, ScriptState*);
ScopedPersistent<v8::Function> m_callback;
RefPtr<ScriptState> m_scriptState;
};
}
#endif // V8FileWriterCallback_h
//
// Copyright 2013 Pixar
//
// Licensed under the Apache License, Version 2.0 (the "Apache License")
// with the following modification; you may not use this file except in
// compliance with the Apache License and the following modification to it:
// Section 6. Trademarks. is deleted and replaced with:
//
// 6. Trademarks. This License does not grant permission to use the trade
// names, trademarks, service marks, or product names of the Licensor
// and its affiliates, except as required to comply with Section 4(c) of
// the License and to reproduce the content of the NOTICE file.
//
// You may obtain a copy of the Apache License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the Apache License with the above modification is
// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the Apache License for the specific
// language governing permissions and limitations under the Apache License.
//
#ifndef OPENSUBDIV3_OSD_MTL_MESH_H
#define OPENSUBDIV3_OSD_MTL_MESH_H
#include "../version.h"
#include "../osd/mesh.h"
#include "../osd/mtlPatchTable.h"
namespace OpenSubdiv {
namespace OPENSUBDIV_VERSION {
namespace Osd {
typedef MeshInterface<MTLPatchTable> MTLMeshInterface;
} // end namespace Osd
} // end namespace OPENSUBDIV_VERSION
using namespace OPENSUBDIV_VERSION;
} // end namespace OpenSubdiv
#endif // OPENSUBDIV3_OSD_MTL_MESH_H
#include <stdlib.h>
#include <stdio.h>
#include "uthash.h"
typedef struct hs_t {
int id;
int tag;
UT_hash_handle hh;
} hs_t;
static void pr(hs_t **hdpp)
{
hs_t *el, *tmp, *hdp = *hdpp;
HASH_ITER(hh, hdp, el, tmp) {
printf("id %d, tag %d\n",el->id,el->tag);
}
}
int main()
{
hs_t *hs_head=NULL, *tmp, *replaced=NULL;
tmp = (hs_t*)malloc(sizeof(hs_t));
if (tmp == NULL) {
exit(-1);
}
tmp->id = 10;
tmp->tag = 100;
HASH_REPLACE_INT(hs_head,id,tmp,replaced);
if(replaced == NULL) {
printf("added %d %d\n",tmp->id,tmp->tag);
} else {
printf("ERROR, ended up replacing a value, replaced: %p\n",(void*)replaced);
}
pr(&hs_head);
tmp = (hs_t*)malloc(sizeof(hs_t));
if (tmp == NULL) {
exit(-1);
}
tmp->id=11;
tmp->tag = 101;
HASH_REPLACE_INT(hs_head,id,tmp,replaced);
if(replaced == NULL) {
printf("added %d %d\n",tmp->id,tmp->tag);
} else {
printf("ERROR, ended up replacing a value, replaced: %p\n",(void*)replaced);
}
pr(&hs_head);
tmp = (hs_t*)malloc(sizeof(hs_t));
if (tmp == NULL) {
exit(-1);
}
tmp->id=11;
tmp->tag = 102;
HASH_REPLACE_INT(hs_head,id,tmp,replaced);
if(replaced == NULL) {
printf("ERROR, expected to replace a value with key: %d\n",tmp->id);
} else {
printf("replaced %d that had tag %d with tag %d\n",tmp->id,replaced->tag,tmp->tag);
}
pr(&hs_head);
return 0;
}
// (C) Copyright John Maddock 2001.
// (C) Copyright Jens Maurer 2001 - 2003.
// (C) Copyright Peter Dimov 2002.
// (C) Copyright Aleksey Gurtovoy 2002 - 2003.
// (C) Copyright David Abrahams 2002.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
// Sun C++ compiler setup:
# if __SUNPRO_CC <= 0x500
# define BOOST_NO_MEMBER_TEMPLATES
# define BOOST_NO_FUNCTION_TEMPLATE_ORDERING
# endif
# if (__SUNPRO_CC <= 0x520)
//
// Sunpro 5.2 and earlier:
//
// although sunpro 5.2 supports the syntax for
// inline initialization it often gets the value
// wrong, especially where the value is computed
// from other constants (J Maddock 6th May 2001)
# define BOOST_NO_INCLASS_MEMBER_INITIALIZATION
// Although sunpro 5.2 supports the syntax for
// partial specialization, it often seems to
// bind to the wrong specialization. Better
// to disable it until support becomes more stable
// (J Maddock 6th May 2001).
# define BOOST_NO_TEMPLATE_PARTIAL_SPECIALIZATION
# endif
# if (__SUNPRO_CC <= 0x530)
// Requesting debug info (-g) with Boost.Python results
// in an internal compiler error for "static const"
// initialized in-class.
// >> Assertion: (../links/dbg_cstabs.cc, line 611)
// while processing ../test.cpp at line 0.
// (Jens Maurer according to Gottfried Ganssauge 04 Mar 2002)
# define BOOST_NO_INCLASS_MEMBER_INITIALIZATION
// SunPro 5.3 has better support for partial specialization,
// but breaks when compiling std::less<shared_ptr<T> >
// (Jens Maurer 4 Nov 2001).
// std::less specialization fixed as reported by George
// Heintzelman; partial specialization re-enabled
// (Peter Dimov 17 Jan 2002)
//# define BOOST_NO_TEMPLATE_PARTIAL_SPECIALIZATION
// integral constant expressions with 64 bit numbers fail
# define BOOST_NO_INTEGRAL_INT64_T
# endif
# if (__SUNPRO_CC < 0x570)
# define BOOST_NO_TEMPLATE_TEMPLATES
// see http://lists.boost.org/MailArchives/boost/msg47184.php
// and http://lists.boost.org/MailArchives/boost/msg47220.php
# define BOOST_NO_INCLASS_MEMBER_INITIALIZATION
# define BOOST_NO_SFINAE
# define BOOST_NO_ARRAY_TYPE_SPECIALIZATIONS
# endif
# if (__SUNPRO_CC <= 0x580)
# define BOOST_NO_IS_ABSTRACT
# endif
# if (__SUNPRO_CC <= 0x5100)
// Sun 5.10 may not correctly value-initialize objects of
// some user defined types, as was reported in April 2010
// (CR 6947016), and confirmed by Steve Clamage.
// (Niels Dekker, LKEB, May 2010).
# define BOOST_NO_COMPLETE_VALUE_INITIALIZATION
# endif
//
// Dynamic shared object (DSO) and dynamic-link library (DLL) support
//
#if __SUNPRO_CC > 0x500
# define BOOST_SYMBOL_EXPORT __global
# define BOOST_SYMBOL_IMPORT __global
# define BOOST_SYMBOL_VISIBLE __global
#endif
//
// Issues that affect all known versions:
//
#define BOOST_NO_TWO_PHASE_NAME_LOOKUP
#define BOOST_NO_ADL_BARRIER
//
// C++0x features
//
# define BOOST_HAS_LONG_LONG
#define BOOST_NO_AUTO_DECLARATIONS
#define BOOST_NO_AUTO_MULTIDECLARATIONS
#define BOOST_NO_CHAR16_T
#define BOOST_NO_CHAR32_T
#define BOOST_NO_CONSTEXPR
#define BOOST_NO_DECLTYPE
#define BOOST_NO_DEFAULTED_FUNCTIONS
#define BOOST_NO_DELETED_FUNCTIONS
#define BOOST_NO_EXPLICIT_CONVERSION_OPERATORS
#define BOOST_NO_EXTERN_TEMPLATE
#define BOOST_NO_FUNCTION_TEMPLATE_DEFAULT_ARGS
#define BOOST_NO_INITIALIZER_LISTS
#define BOOST_NO_LAMBDAS
#define BOOST_NO_NOEXCEPT
#define BOOST_NO_NULLPTR
#define BOOST_NO_RAW_LITERALS
#define BOOST_NO_RVALUE_REFERENCES
#define BOOST_NO_SCOPED_ENUMS
#define BOOST_NO_SFINAE_EXPR
#define BOOST_NO_STATIC_ASSERT
#define BOOST_NO_TEMPLATE_ALIASES
#define BOOST_NO_UNICODE_LITERALS
#define BOOST_NO_VARIADIC_TEMPLATES
#define BOOST_NO_VARIADIC_MACROS
#define BOOST_NO_UNIFIED_INITIALIZATION_SYNTAX
//
// Version
//
#define BOOST_COMPILER "Sun compiler version " BOOST_STRINGIZE(__SUNPRO_CC)
//
// versions check:
// we don't support sunpro prior to version 4:
#if __SUNPRO_CC < 0x400
#error "Compiler not supported or configured - please reconfigure"
#endif
//
// last known and checked version is 0x590:
#if (__SUNPRO_CC > 0x590)
# if defined(BOOST_ASSERT_CONFIG)
# error "Unknown compiler version - please run the configure tests and report the results"
# endif
#endif
/**
* SyntaxHighlighter
* http://alexgorbatchev.com/SyntaxHighlighter
*
* SyntaxHighlighter is donationware. If you are using it, please donate.
* http://alexgorbatchev.com/SyntaxHighlighter/donate.html
*
* @version
* 3.0.83 (July 02 2010)
*
* @copyright
* Copyright (C) 2004-2010 Alex Gorbatchev.
*
* @license
* Dual licensed under the MIT and GPL licenses.
*/
;(function()
{
// CommonJS
typeof(require) != 'undefined' ? SyntaxHighlighter = require('shCore').SyntaxHighlighter : null;
function Brush()
{
this.regexList = [
{ regex: /^\+\+\+.*$/gm, css: 'color2' },
{ regex: /^\-\-\-.*$/gm, css: 'color2' },
{ regex: /^\s.*$/gm, css: 'color1' },
{ regex: /^@@.*@@$/gm, css: 'variable' },
{ regex: /^\+[^\+]{1}.*$/gm, css: 'string' },
{ regex: /^\-[^\-]{1}.*$/gm, css: 'comments' }
];
};
Brush.prototype = new SyntaxHighlighter.Highlighter();
Brush.aliases = ['diff', 'patch'];
SyntaxHighlighter.brushes.Diff = Brush;
// CommonJS
typeof(exports) != 'undefined' ? exports.Brush = Brush : null;
})();
/**
* cbpAnimatedHeader.js v1.0.0
* http://www.codrops.com
*
* Licensed under the MIT license.
* http://www.opensource.org/licenses/mit-license.php
*
* Copyright 2013, Codrops
* http://www.codrops.com
*/
var cbpAnimatedHeader = (function() {
var docElem = document.documentElement,
header = document.querySelector( '.navbar-default' ),
didScroll = false,
changeHeaderOn = 300;
function init() {
window.addEventListener( 'scroll', function( event ) {
if( !didScroll ) {
didScroll = true;
setTimeout( scrollPage, 250 );
}
}, false );
}
function scrollPage() {
var sy = scrollY();
if ( sy >= changeHeaderOn ) {
classie.add( header, 'navbar-shrink' );
}
else {
classie.remove( header, 'navbar-shrink' );
}
didScroll = false;
}
function scrollY() {
return window.pageYOffset || docElem.scrollTop;
}
init();
})();
/* from asm/termbits.h */
#define TARGET_NCCS 19
struct target_termios {
unsigned int c_iflag; /* input mode flags */
unsigned int c_oflag; /* output mode flags */
unsigned int c_cflag; /* control mode flags */
unsigned int c_lflag; /* local mode flags */
unsigned char c_line; /* line discipline */
unsigned char c_cc[TARGET_NCCS]; /* control characters */
};
/* c_iflag bits */
#define TARGET_IGNBRK 0000001
#define TARGET_BRKINT 0000002
#define TARGET_IGNPAR 0000004
#define TARGET_PARMRK 0000010
#define TARGET_INPCK 0000020
#define TARGET_ISTRIP 0000040
#define TARGET_INLCR 0000100
#define TARGET_IGNCR 0000200
#define TARGET_ICRNL 0000400
#define TARGET_IUCLC 0001000
#define TARGET_IXON 0002000
#define TARGET_IXANY 0004000
#define TARGET_IXOFF 0010000
#define TARGET_IMAXBEL 0020000
#define TARGET_IUTF8 0040000
/* c_oflag bits */
#define TARGET_OPOST 0000001
#define TARGET_OLCUC 0000002
#define TARGET_ONLCR 0000004
#define TARGET_OCRNL 0000010
#define TARGET_ONOCR 0000020
#define TARGET_ONLRET 0000040
#define TARGET_OFILL 0000100
#define TARGET_OFDEL 0000200
#define TARGET_NLDLY 0000400
#define TARGET_NL0 0000000
#define TARGET_NL1 0000400
#define TARGET_CRDLY 0003000
#define TARGET_CR0 0000000
#define TARGET_CR1 0001000
#define TARGET_CR2 0002000
#define TARGET_CR3 0003000
#define TARGET_TABDLY 0014000
#define TARGET_TAB0 0000000
#define TARGET_TAB1 0004000
#define TARGET_TAB2 0010000
#define TARGET_TAB3 0014000
#define TARGET_XTABS 0014000
#define TARGET_BSDLY 0020000
#define TARGET_BS0 0000000
#define TARGET_BS1 0020000
#define TARGET_VTDLY 0040000
#define TARGET_VT0 0000000
#define TARGET_VT1 0040000
#define TARGET_FFDLY 0100000
#define TARGET_FF0 0000000
#define TARGET_FF1 0100000
/* c_cflag bit meaning */
#define TARGET_CBAUD 0010017
#define TARGET_B0 0000000 /* hang up */
#define TARGET_B50 0000001
#define TARGET_B75 0000002
#define TARGET_B110 0000003
#define TARGET_B134 0000004
#define TARGET_B150 0000005
#define TARGET_B200 0000006
#define TARGET_B300 0000007
#define TARGET_B600 0000010
#define TARGET_B1200 0000011
#define TARGET_B1800 0000012
#define TARGET_B2400 0000013
#define TARGET_B4800 0000014
#define TARGET_B9600 0000015
#define TARGET_B19200 0000016
#define TARGET_B38400 0000017
#define TARGET_EXTA B19200
#define TARGET_EXTB B38400
#define TARGET_CSIZE 0000060
#define TARGET_CS5 0000000
#define TARGET_CS6 0000020
#define TARGET_CS7 0000040
#define TARGET_CS8 0000060
#define TARGET_CSTOPB 0000100
#define TARGET_CREAD 0000200
#define TARGET_PARENB 0000400
#define TARGET_PARODD 0001000
#define TARGET_HUPCL 0002000
#define TARGET_CLOCAL 0004000
#define TARGET_CBAUDEX 0010000
#define TARGET_B57600 0010001
#define TARGET_B115200 0010002
#define TARGET_B230400 0010003
#define TARGET_B460800 0010004
#define TARGET_B500000 0010005
#define TARGET_B576000 0010006
#define TARGET_B921600 0010007
#define TARGET_B1000000 0010010
#define TARGET_B1152000 0010011
#define TARGET_B1500000 0010012
#define TARGET_B2000000 0010013
#define TARGET_B2500000 0010014
#define TARGET_B3000000 0010015
#define TARGET_B3500000 0010016
#define TARGET_B4000000 0010017
#define TARGET_CIBAUD 002003600000 /* input baud rate (not used) */
#define TARGET_CMSPAR 010000000000 /* mark or space (stick) parity */
#define TARGET_CRTSCTS 020000000000 /* flow control */
/* c_lflag bits */
#define TARGET_ISIG 0000001
#define TARGET_ICANON 0000002
#define TARGET_XCASE 0000004
#define TARGET_ECHO 0000010
#define TARGET_ECHOE 0000020
#define TARGET_ECHOK 0000040
#define TARGET_ECHONL 0000100
#define TARGET_NOFLSH 0000200
#define TARGET_TOSTOP 0000400
#define TARGET_ECHOCTL 0001000
#define TARGET_ECHOPRT 0002000
#define TARGET_ECHOKE 0004000
#define TARGET_FLUSHO 0010000
#define TARGET_PENDIN 0040000
#define TARGET_IEXTEN 0100000
/* c_cc character offsets */
#define TARGET_VINTR 0
#define TARGET_VQUIT 1
#define TARGET_VERASE 2
#define TARGET_VKILL 3
#define TARGET_VEOF 4
#define TARGET_VTIME 5
#define TARGET_VMIN 6
#define TARGET_VSWTC 7
#define TARGET_VSTART 8
#define TARGET_VSTOP 9
#define TARGET_VSUSP 10
#define TARGET_VEOL 11
#define TARGET_VREPRINT 12
#define TARGET_VDISCARD 13
#define TARGET_VWERASE 14
#define TARGET_VLNEXT 15
#define TARGET_VEOL2 16
/* ioctls */
#define TARGET_TCGETS 0x5401
#define TARGET_TCSETS 0x5402
#define TARGET_TCSETSW 0x5403
#define TARGET_TCSETSF 0x5404
#define TARGET_TCGETA 0x5405
#define TARGET_TCSETA 0x5406
#define TARGET_TCSETAW 0x5407
#define TARGET_TCSETAF 0x5408
#define TARGET_TCSBRK 0x5409
#define TARGET_TCXONC 0x540A
#define TARGET_TCFLSH 0x540B
#define TARGET_TIOCEXCL 0x540C
#define TARGET_TIOCNXCL 0x540D
#define TARGET_TIOCSCTTY 0x540E
#define TARGET_TIOCGPGRP 0x540F
#define TARGET_TIOCSPGRP 0x5410
#define TARGET_TIOCOUTQ 0x5411
#define TARGET_TIOCSTI 0x5412
#define TARGET_TIOCGWINSZ 0x5413
#define TARGET_TIOCSWINSZ 0x5414
#define TARGET_TIOCMGET 0x5415
#define TARGET_TIOCMBIS 0x5416
#define TARGET_TIOCMBIC 0x5417
#define TARGET_TIOCMSET 0x5418
#define TARGET_TIOCGSOFTCAR 0x5419
#define TARGET_TIOCSSOFTCAR 0x541A
#define TARGET_FIONREAD 0x541B
#define TARGET_TIOCINQ TARGET_FIONREAD
#define TARGET_TIOCLINUX 0x541C
#define TARGET_TIOCCONS 0x541D
#define TARGET_TIOCGSERIAL 0x541E
#define TARGET_TIOCSSERIAL 0x541F
#define TARGET_TIOCPKT 0x5420
#define TARGET_FIONBIO 0x5421
#define TARGET_TIOCNOTTY 0x5422
#define TARGET_TIOCSETD 0x5423
#define TARGET_TIOCGETD 0x5424
#define TARGET_TCSBRKP 0x5425 /* Needed for POSIX tcsendbreak() */
#define TARGET_TIOCTTYGSTRUCT 0x5426 /* For debugging only */
#define TARGET_TIOCSBRK 0x5427 /* BSD compatibility */
#define TARGET_TIOCCBRK 0x5428 /* BSD compatibility */
#define TARGET_TIOCGSID 0x5429 /* Return the session ID of FD */
#define TARGET_TIOCGPTN TARGET_IOR('T',0x30, unsigned int) /* Get Pty Number (of pty-mux device) */
#define TARGET_TIOCSPTLCK TARGET_IOW('T',0x31, int) /* Lock/unlock Pty */
#define TARGET_FIONCLEX 0x5450 /* these numbers need to be adjusted. */
#define TARGET_FIOCLEX 0x5451
#define TARGET_FIOASYNC 0x5452
#define TARGET_TIOCSERCONFIG 0x5453
#define TARGET_TIOCSERGWILD 0x5454
#define TARGET_TIOCSERSWILD 0x5455
#define TARGET_TIOCGLCKTRMIOS 0x5456
#define TARGET_TIOCSLCKTRMIOS 0x5457
#define TARGET_TIOCSERGSTRUCT 0x5458 /* For debugging only */
#define TARGET_TIOCSERGETLSR 0x5459 /* Get line status register */
#define TARGET_TIOCSERGETMULTI 0x545A /* Get multiport config */
#define TARGET_TIOCSERSETMULTI 0x545B /* Set multiport config */
#define TARGET_TIOCMIWAIT 0x545C /* wait for a change on serial input line(s) */
#define TARGET_TIOCGICOUNT 0x545D /* read serial port inline interrupt counts */
#define TARGET_TIOCGHAYESESP 0x545E /* Get Hayes ESP configuration */
#define TARGET_TIOCSHAYESESP 0x545F /* Set Hayes ESP configuration */
/* Used for packet mode */
#define TARGET_TIOCPKT_DATA 0
#define TARGET_TIOCPKT_FLUSHREAD 1
#define TARGET_TIOCPKT_FLUSHWRITE 2
#define TARGET_TIOCPKT_STOP 4
#define TARGET_TIOCPKT_START 8
#define TARGET_TIOCPKT_NOSTOP 16
#define TARGET_TIOCPKT_DOSTOP 32
#define TARGET_TIOCSER_TEMT 0x01 /* Transmitter physically empty */
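The `c_cflag` constants above pack the baud rate into the bits selected by `TARGET_CBAUD` (including the `CBAUDEX` extension bit). A small Go sketch, with a few constants copied from the defines above, shows how termios-style code masks the rate back out of a flag word:

```go
package main

import "fmt"

// A few of the c_cflag constants from the header above (octal values as defined there).
const (
	TARGET_CBAUD  = 0o010017 // baud-rate mask, including the CBAUDEX bit
	TARGET_B9600  = 0o000015
	TARGET_B38400 = 0o000017
	TARGET_CS8    = 0o000060
	TARGET_CREAD  = 0o000200
)

// baudRate masks out the speed bits, discarding character-size,
// parity, and other mode flags sharing the same word.
func baudRate(cflag uint32) uint32 {
	return cflag & TARGET_CBAUD
}

func main() {
	cflag := uint32(TARGET_B9600 | TARGET_CS8 | TARGET_CREAD)
	fmt.Println(baudRate(cflag) == TARGET_B9600) // true: CS8 and CREAD bits are masked off
}
```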
file (GLOB SOURCE_FILES *.cpp)
file (GLOB HEADER_FILES *.hpp)
init_target (iostream_server)
build_executable (${TARGET_NAME} ${SOURCE_FILES} ${HEADER_FILES})
link_boost ()
final_target ()
set_target_properties(${TARGET_NAME} PROPERTIES FOLDER "examples")
obj-$(CONFIG_LOOPBACK_TARGET) += tcm_loop.o
<?php
declare(strict_types=1);
namespace KejawenLab\Application\SemartHris\DataFixtures;
use Doctrine\Common\DataFixtures\DependentFixtureInterface;
use KejawenLab\Application\SemartHris\Entity\City;
/**
* @author Muhamad Surya Iksanudin <surya.iksanudin@gmail.com>
*/
class CityFixtures extends Fixture implements DependentFixtureInterface
{
/**
* @return array
*/
public function getDependencies()
{
return [RegionFixtures::class];
}
/**
* @return string
*/
protected function getFixtureFilePath(): string
{
return 'city.yaml';
}
/**
* @return mixed
*/
protected function createNew()
{
return new City();
}
/**
* @return string
*/
protected function getReferenceKey(): string
{
return 'city';
}
}
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: lr-model-meta.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='lr-model-meta.proto',
package='com.webank.ai.fate.core.mlmodel.buffer',
syntax='proto3',
serialized_options=_b('B\020LRModelMetaProto'),
serialized_pb=_b('\n\x13lr-model-meta.proto\x12&com.webank.ai.fate.core.mlmodel.buffer\"\x81\x02\n\x0bLRModelMeta\x12\x0f\n\x07penalty\x18\x01 \x01(\t\x12\x0b\n\x03tol\x18\x02 \x01(\x01\x12\r\n\x05\x61lpha\x18\x03 \x01(\x01\x12\x11\n\toptimizer\x18\x04 \x01(\t\x12\x14\n\x0cparty_weight\x18\x05 \x01(\x01\x12\x12\n\nbatch_size\x18\x06 \x01(\x03\x12\x15\n\rlearning_rate\x18\x07 \x01(\x01\x12\x10\n\x08max_iter\x18\x08 \x01(\x03\x12\x12\n\nearly_stop\x18\t \x01(\t\x12\x1a\n\x12re_encrypt_batches\x18\n \x01(\x03\x12\x15\n\rfit_intercept\x18\x0b \x01(\x08\x12\x18\n\x10need_one_vs_rest\x18\x0c \x01(\x08\x42\x12\x42\x10LRModelMetaProtob\x06proto3')
)
_LRMODELMETA = _descriptor.Descriptor(
name='LRModelMeta',
full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='penalty', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.penalty', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='tol', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.tol', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='alpha', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.alpha', index=2,
number=3, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='optimizer', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.optimizer', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='party_weight', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.party_weight', index=4,
number=5, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='batch_size', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.batch_size', index=5,
number=6, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='learning_rate', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.learning_rate', index=6,
number=7, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='max_iter', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.max_iter', index=7,
number=8, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='early_stop', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.early_stop', index=8,
number=9, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='re_encrypt_batches', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.re_encrypt_batches', index=9,
number=10, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='fit_intercept', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.fit_intercept', index=10,
number=11, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='need_one_vs_rest', full_name='com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta.need_one_vs_rest', index=11,
number=12, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=64,
serialized_end=321,
)
DESCRIPTOR.message_types_by_name['LRModelMeta'] = _LRMODELMETA
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
LRModelMeta = _reflection.GeneratedProtocolMessageType('LRModelMeta', (_message.Message,), dict(
DESCRIPTOR = _LRMODELMETA,
__module__ = 'lr_model_meta_pb2'
# @@protoc_insertion_point(class_scope:com.webank.ai.fate.core.mlmodel.buffer.LRModelMeta)
))
_sym_db.RegisterMessage(LRModelMeta)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
fileFormatVersion: 2
guid: 326d78bb223bf4da9ab216b6e6dca845
timeCreated: 1475786653
licenseType: Pro
MonoImporter:
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:
/////////////////////////////////////////////////////////////////////////////
//
// (C) Copyright Ion Gaztanaga 2006-2013
//
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
//
// See http://www.boost.org/libs/intrusive for documentation.
//
/////////////////////////////////////////////////////////////////////////////
#ifndef BOOST_INTRUSIVE_MEMBER_VALUE_TRAITS_HPP
#define BOOST_INTRUSIVE_MEMBER_VALUE_TRAITS_HPP
#include <boost/intrusive/detail/config_begin.hpp>
#include <boost/intrusive/intrusive_fwd.hpp>
#include <boost/intrusive/link_mode.hpp>
#include <boost/intrusive/detail/parent_from_member.hpp>
#include <boost/intrusive/detail/to_raw_pointer.hpp>
#include <boost/intrusive/pointer_traits.hpp>
#if defined(BOOST_HAS_PRAGMA_ONCE)
# pragma once
#endif
namespace boost {
namespace intrusive {
//!This value traits template is used to create value traits
//!from user defined node traits where value_traits::value_type will
//!store a node_traits::node
template< class T, class NodeTraits
, typename NodeTraits::node T::* PtrToMember
, link_mode_type LinkMode
#ifdef BOOST_INTRUSIVE_DOXYGEN_INVOKED
= safe_link
#endif
>
struct member_value_traits
{
public:
typedef NodeTraits node_traits;
typedef T value_type;
typedef typename node_traits::node node;
typedef typename node_traits::node_ptr node_ptr;
typedef typename node_traits::const_node_ptr const_node_ptr;
typedef pointer_traits<node_ptr> node_ptr_traits;
typedef typename pointer_traits<node_ptr>::template
rebind_pointer<T>::type pointer;
typedef typename pointer_traits<node_ptr>::template
rebind_pointer<const T>::type const_pointer;
//typedef typename pointer_traits<pointer>::reference reference;
//typedef typename pointer_traits<const_pointer>::reference const_reference;
typedef value_type & reference;
typedef const value_type & const_reference;
static const link_mode_type link_mode = LinkMode;
BOOST_INTRUSIVE_FORCEINLINE static node_ptr to_node_ptr(reference value)
{ return pointer_traits<node_ptr>::pointer_to(value.*PtrToMember); }
BOOST_INTRUSIVE_FORCEINLINE static const_node_ptr to_node_ptr(const_reference value)
{ return pointer_traits<const_node_ptr>::pointer_to(value.*PtrToMember); }
BOOST_INTRUSIVE_FORCEINLINE static pointer to_value_ptr(const node_ptr &n)
{
return pointer_traits<pointer>::pointer_to(*detail::parent_from_member<value_type, node>
(boost::intrusive::detail::to_raw_pointer(n), PtrToMember));
}
BOOST_INTRUSIVE_FORCEINLINE static const_pointer to_value_ptr(const const_node_ptr &n)
{
return pointer_traits<const_pointer>::pointer_to(*detail::parent_from_member<value_type, node>
(boost::intrusive::detail::to_raw_pointer(n), PtrToMember));
}
};
} //namespace intrusive
} //namespace boost
#include <boost/intrusive/detail/config_end.hpp>
#endif //BOOST_INTRUSIVE_MEMBER_VALUE_TRAITS_HPP
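`member_value_traits::to_value_ptr` above recovers the enclosing object from a pointer to its embedded node via `detail::parent_from_member`. The same container-of idiom can be sketched in Go with `unsafe.Offsetof`; the `item` and `node` types here are illustrative stand-ins, not from the Boost source:

```go
package main

import (
	"fmt"
	"unsafe"
)

// node plays the role of node_traits::node, the intrusive hook.
type node struct{ next *node }

// item embeds the hook as a plain member, like the
// `typename NodeTraits::node T::* PtrToMember` member above.
type item struct {
	value int
	hook  node
}

// itemFromNode is the Go spelling of parent_from_member: subtract the
// member's offset within the struct from the member's address.
func itemFromNode(n *node) *item {
	return (*item)(unsafe.Pointer(uintptr(unsafe.Pointer(n)) - unsafe.Offsetof(item{}.hook)))
}

func main() {
	it := &item{value: 42}
	fmt.Println(itemFromNode(&it.hook).value) // 42: the parent is recovered from its hook
}
```

This is what lets intrusive containers link nodes without owning the values: the container stores `node` pointers and maps back to `item` on demand.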
/***************************************************************************************************
* Copyright (c) 2017-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are permitted
* provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright notice, this list of
* conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the names of its contributors may be used
* to endorse or promote products derived from this software without specific prior written
* permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
* FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
**************************************************************************************************/
/* \file
\brief Execution environment
*/
#include <iostream>
#include <stdexcept>
// Profiler includes
#include "cutlass_profiler.h"
#include "gemm_operation_profiler.h"
#include "sparse_gemm_operation_profiler.h"
/////////////////////////////////////////////////////////////////////////////////////////////////
namespace cutlass {
namespace profiler {
/////////////////////////////////////////////////////////////////////////////////////////////////
CutlassProfiler::CutlassProfiler(
Options const &options
):
options_(options) {
operation_profilers_.emplace_back(new GemmOperationProfiler(options));
operation_profilers_.emplace_back(new SparseGemmOperationProfiler(options));
}
CutlassProfiler::~CutlassProfiler() {
}
/////////////////////////////////////////////////////////////////////////////////////////////////
/// Execute the program
int CutlassProfiler::operator()() {
if (options_.about.help) {
if (options_.operation_kind == library::OperationKind::kInvalid) {
print_usage_(std::cout);
}
else {
for (auto & profiler : operation_profilers_) {
if (profiler->kind() == options_.operation_kind) {
profiler->print_usage(std::cout);
profiler->print_examples(std::cout);
return 0;
}
}
}
return 0;
}
else if (options_.about.version) {
options_.about.print_version(std::cout);
std::cout << std::endl;
return 0;
}
else if (options_.about.device_info) {
options_.device.print_device_info(std::cout);
return 0;
}
if (options_.execution_mode == ExecutionMode::kProfile ||
options_.execution_mode == ExecutionMode::kDryRun ||
options_.execution_mode == ExecutionMode::kTrace) {
// Profiles all operations
profile_();
}
else if (options_.execution_mode == ExecutionMode::kEnumerate) {
// Enumerates all operations
enumerate_();
}
return 0;
}
/////////////////////////////////////////////////////////////////////////////////////////////////
/// Enumerates all operations
void CutlassProfiler::enumerate_() {
}
/// Profiles all operations
int CutlassProfiler::profile_() {
int result = 0;
DeviceContext device_context;
// For all profilers
for (auto & profiler : operation_profilers_) {
if (options_.operation_kind == library::OperationKind::kInvalid ||
options_.operation_kind == profiler->kind()) {
result = profiler->profile_all(options_, library::Singleton::get().manifest, device_context);
if (result) {
return result;
}
}
}
return result;
}
/////////////////////////////////////////////////////////////////////////////////////////////////
/// Prints all options
void CutlassProfiler::print_usage_(std::ostream &out) {
options_.print_usage(out);
out << "\nOperations:\n\n";
// For all profilers
for (auto & profiler : operation_profilers_) {
std::string kind_str = library::to_string(profiler->kind());
size_t kAlignment = 40;
size_t columns = 0;
if (kind_str.size() < kAlignment) {
columns = kAlignment - kind_str.size();
}
out << " " << kind_str << std::string(columns, ' ') << profiler->description() << "\n";
}
out << "\n\nFor details about a particular function, specify the function name with --help.\n\nExample:\n\n"
<< " $ cutlass_profiler --operation=Gemm --help\n\n"
;
}
/// Prints usage
void CutlassProfiler::print_options_(std::ostream &out) {
options_.print_options(out);
}
/////////////////////////////////////////////////////////////////////////////////////////////////
/// Initializes the CUDA device
void CutlassProfiler::initialize_device_() {
cudaError_t result = cudaSetDevice(options_.device.device);
if (result != cudaSuccess) {
std::cerr << "Failed to set device.";
throw std::runtime_error("Failed to set device");
}
}
/////////////////////////////////////////////////////////////////////////////////////////////////
} // namespace profiler
} // namespace cutlass
/////////////////////////////////////////////////////////////////////////////////////////////////
package assertion
import (
"fmt"
"reflect"
"github.com/onsi/gomega/types"
)
type Assertion struct {
actualInput interface{}
failWrapper *types.GomegaFailWrapper
offset int
extra []interface{}
}
func New(actualInput interface{}, failWrapper *types.GomegaFailWrapper, offset int, extra ...interface{}) *Assertion {
return &Assertion{
actualInput: actualInput,
failWrapper: failWrapper,
offset: offset,
extra: extra,
}
}
func (assertion *Assertion) Should(matcher types.GomegaMatcher, optionalDescription ...interface{}) bool {
assertion.failWrapper.TWithHelper.Helper()
return assertion.vetExtras(optionalDescription...) && assertion.match(matcher, true, optionalDescription...)
}
func (assertion *Assertion) ShouldNot(matcher types.GomegaMatcher, optionalDescription ...interface{}) bool {
assertion.failWrapper.TWithHelper.Helper()
return assertion.vetExtras(optionalDescription...) && assertion.match(matcher, false, optionalDescription...)
}
func (assertion *Assertion) To(matcher types.GomegaMatcher, optionalDescription ...interface{}) bool {
assertion.failWrapper.TWithHelper.Helper()
return assertion.vetExtras(optionalDescription...) && assertion.match(matcher, true, optionalDescription...)
}
func (assertion *Assertion) ToNot(matcher types.GomegaMatcher, optionalDescription ...interface{}) bool {
assertion.failWrapper.TWithHelper.Helper()
return assertion.vetExtras(optionalDescription...) && assertion.match(matcher, false, optionalDescription...)
}
func (assertion *Assertion) NotTo(matcher types.GomegaMatcher, optionalDescription ...interface{}) bool {
assertion.failWrapper.TWithHelper.Helper()
return assertion.vetExtras(optionalDescription...) && assertion.match(matcher, false, optionalDescription...)
}
func (assertion *Assertion) buildDescription(optionalDescription ...interface{}) string {
switch len(optionalDescription) {
case 0:
return ""
case 1:
if describe, ok := optionalDescription[0].(func() string); ok {
return describe() + "\n"
}
}
return fmt.Sprintf(optionalDescription[0].(string), optionalDescription[1:]...) + "\n"
}
func (assertion *Assertion) match(matcher types.GomegaMatcher, desiredMatch bool, optionalDescription ...interface{}) bool {
matches, err := matcher.Match(assertion.actualInput)
assertion.failWrapper.TWithHelper.Helper()
if err != nil {
description := assertion.buildDescription(optionalDescription...)
assertion.failWrapper.Fail(description+err.Error(), 2+assertion.offset)
return false
}
if matches != desiredMatch {
var message string
if desiredMatch {
message = matcher.FailureMessage(assertion.actualInput)
} else {
message = matcher.NegatedFailureMessage(assertion.actualInput)
}
description := assertion.buildDescription(optionalDescription...)
assertion.failWrapper.Fail(description+message, 2+assertion.offset)
return false
}
return true
}
func (assertion *Assertion) vetExtras(optionalDescription ...interface{}) bool {
success, message := vetExtras(assertion.extra)
if success {
return true
}
description := assertion.buildDescription(optionalDescription...)
assertion.failWrapper.TWithHelper.Helper()
assertion.failWrapper.Fail(description+message, 2+assertion.offset)
return false
}
func vetExtras(extras []interface{}) (bool, string) {
for i, extra := range extras {
if extra != nil {
zeroValue := reflect.Zero(reflect.TypeOf(extra)).Interface()
if !reflect.DeepEqual(zeroValue, extra) {
message := fmt.Sprintf("Unexpected non-nil/non-zero extra argument at index %d:\n\t<%T>: %#v", i+1, extra, extra)
return false, message
}
}
}
return true, ""
}
{
"$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
"webJobName": "TicketDesk-SearchMonitor-Job",
"startTime": null,
"endTime": null,
"jobRecurrenceFrequency": null,
"interval": null,
"runMode": "Continuous"
}
/// @brief Include to use 2d array textures.
/// @file gli/texture2d_array.hpp
#pragma once
#include "texture2d.hpp"
namespace gli
{
/// 2d array texture
class texture2d_array : public texture
{
public:
typedef extent2d extent_type;
public:
/// Create an empty texture 2D array
texture2d_array();
/// Create a texture2d_array and allocate a new storage_linear
texture2d_array(
format_type Format,
extent_type const& Extent,
size_type Layers,
size_type Levels,
swizzles_type const& Swizzles = swizzles_type(SWIZZLE_RED, SWIZZLE_GREEN, SWIZZLE_BLUE, SWIZZLE_ALPHA));
/// Create a texture2d_array and allocate a new storage_linear with a complete mipmap chain
texture2d_array(
format_type Format,
extent_type const& Extent,
size_type Layers,
swizzles_type const& Swizzles = swizzles_type(SWIZZLE_RED, SWIZZLE_GREEN, SWIZZLE_BLUE, SWIZZLE_ALPHA));
/// Create a texture2d_array view with an existing storage_linear
explicit texture2d_array(
texture const& Texture);
/// Create a texture2d_array view with an existing storage_linear
texture2d_array(
texture const& Texture,
format_type Format,
size_type BaseLayer, size_type MaxLayer,
size_type BaseFace, size_type MaxFace,
size_type BaseLevel, size_type MaxLevel,
swizzles_type const& Swizzles = swizzles_type(SWIZZLE_RED, SWIZZLE_GREEN, SWIZZLE_BLUE, SWIZZLE_ALPHA));
/// Create a texture view, reference a subset of an exiting texture2d_array instance
texture2d_array(
texture2d_array const& Texture,
size_type BaseLayer, size_type MaxLayer,
size_type BaseLevel, size_type MaxLevel);
/// Create a view of the texture identified by Layer in the texture array
texture2d operator[](size_type Layer) const;
/// Return the dimensions of a texture instance: width and height
extent_type extent(size_type Level = 0) const;
/// Fetch a texel from a texture. The texture format must be uncompressed.
template <typename gen_type>
gen_type load(extent_type const& TexelCoord, size_type Layer, size_type Level) const;
/// Write a texel to a texture. The texture format must be uncompressed.
template <typename gen_type>
void store(extent_type const& TexelCoord, size_type Layer, size_type Level, gen_type const& Texel);
};
}//namespace gli
#include "./core/texture2d_array.inl"
/*
* Copyright (c) 2020 WildFireChat. All rights reserved.
*/
import UserSettingScope from "../client/userSettingScope";
export default class UserSettingEntry {
scope = UserSettingScope.kUserSettingCustomBegin;
key = '';
value = '';
updateDt = 0;
}
package spire
package math
package poly
import java.math.MathContext
/**
* A type class that can find roots of a polynomial.
*/
trait RootFinder[A] {
/**
* Returns the roots of the polynomial `poly`.
*/
def findRoots(poly: Polynomial[A]): Roots[A]
}
object RootFinder {
final def apply[A](implicit finder: RootFinder[A]): RootFinder[A] = finder
implicit def BigDecimalScaleRootFinder(scale: Int): RootFinder[BigDecimal] =
new RootFinder[BigDecimal] {
def findRoots(poly: Polynomial[BigDecimal]): Roots[BigDecimal] =
new BigDecimalSimpleRoots(poly, scale)
}
implicit def BigDecimalMathContextRootFinder(mc: MathContext): RootFinder[BigDecimal] =
new RootFinder[BigDecimal] {
def findRoots(poly: Polynomial[BigDecimal]): Roots[BigDecimal] =
new BigDecimalRelativeRoots(poly, mc)
}
implicit val RealRootFinder: RootFinder[Real] =
new RootFinder[Real] {
def findRoots(p: Polynomial[Real]): Roots[Real] =
new FixedRealRoots(p)
}
implicit val NumberRootFinder: RootFinder[Number] =
new RootFinder[Number] {
def findRoots(p: Polynomial[Number]): Roots[Number] =
new NumberRoots(p)
}
}
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin,!race linux,!race freebsd,!race netbsd openbsd solaris dragonfly
package unix
import (
"unsafe"
)
const raceenabled = false
func raceAcquire(addr unsafe.Pointer) {
}
func raceReleaseMerge(addr unsafe.Pointer) {
}
func raceReadRange(addr unsafe.Pointer, len int) {
}
func raceWriteRange(addr unsafe.Pointer, len int) {
}
#include "net/ip/uip.h"
#include "net/ipv6/uip-ds6.h"
#include <string.h>
#include "ip64-eth-interface.h"
#define UIP_IP_BUF ((struct uip_ip_hdr *)&uip_buf[UIP_LLH_LEN])
#define DEBUG DEBUG_NONE
#include "net/ip/uip-debug.h"
static void
init(void)
{
PRINTF("eth-bridge: init\n");
ip64_eth_interface.init();
}
/*---------------------------------------------------------------------------*/
static int
output(void)
{
PRINTF("eth-bridge: src=");
PRINT6ADDR(&UIP_IP_BUF->srcipaddr);
PRINTF(" dst=");
PRINT6ADDR(&UIP_IP_BUF->destipaddr);
PRINTF("\n");
ip64_eth_interface.output();
return 0;
}
/*---------------------------------------------------------------------------*/
const struct uip_fallback_interface rpl_interface = {
init, output
};
/*---------------------------------------------------------------------------*/
<!DOCTYPE html>
<html>
<head>
<title>Ruby on Rails: Welcome aboard</title>
<style type="text/css" media="screen">
body {
margin: 0;
margin-bottom: 25px;
padding: 0;
background-color: #f0f0f0;
font-family: "Lucida Grande", "Bitstream Vera Sans", "Verdana";
font-size: 13px;
color: #333;
}
h1 {
font-size: 28px;
color: #000;
}
a {color: #03c}
a:hover {
background-color: #03c;
color: white;
text-decoration: none;
}
#page {
background-color: #f0f0f0;
width: 750px;
margin: 0;
margin-left: auto;
margin-right: auto;
}
#content {
float: left;
background-color: white;
border: 3px solid #aaa;
border-top: none;
padding: 25px;
width: 500px;
}
#sidebar {
float: right;
width: 175px;
}
#footer {
clear: both;
}
#header, #about, #getting-started {
padding-left: 75px;
padding-right: 30px;
}
#header {
background-image: url("assets/rails.png");
background-repeat: no-repeat;
background-position: top left;
height: 64px;
}
#header h1, #header h2 {margin: 0}
#header h2 {
color: #888;
font-weight: normal;
font-size: 16px;
}
#about h3 {
margin: 0;
margin-bottom: 10px;
font-size: 14px;
}
#about-content {
background-color: #ffd;
border: 1px solid #fc0;
margin-left: -55px;
margin-right: -10px;
}
#about-content table {
margin-top: 10px;
margin-bottom: 10px;
font-size: 11px;
border-collapse: collapse;
}
#about-content td {
padding: 10px;
padding-top: 3px;
padding-bottom: 3px;
}
#about-content td.name {color: #555}
#about-content td.value {color: #000}
#about-content ul {
padding: 0;
list-style-type: none;
}
#about-content.failure {
background-color: #fcc;
border: 1px solid #f00;
}
#about-content.failure p {
margin: 0;
padding: 10px;
}
#getting-started {
border-top: 1px solid #ccc;
margin-top: 25px;
padding-top: 15px;
}
#getting-started h1 {
margin: 0;
font-size: 20px;
}
#getting-started h2 {
margin: 0;
font-size: 14px;
font-weight: normal;
color: #333;
margin-bottom: 25px;
}
#getting-started ol {
margin-left: 0;
padding-left: 0;
}
#getting-started li {
font-size: 18px;
color: #888;
margin-bottom: 25px;
}
#getting-started li h2 {
margin: 0;
font-weight: normal;
font-size: 18px;
color: #333;
}
#getting-started li p {
color: #555;
font-size: 13px;
}
#sidebar ul {
margin-left: 0;
padding-left: 0;
}
#sidebar ul h3 {
margin-top: 25px;
font-size: 16px;
padding-bottom: 10px;
border-bottom: 1px solid #ccc;
}
#sidebar li {
list-style-type: none;
}
#sidebar ul.links li {
margin-bottom: 5px;
}
.filename {
font-style: italic;
}
</style>
<script type="text/javascript">
function about() {
  var info = document.getElementById('about-content');
  var xhr;
  if (window.XMLHttpRequest)
    { xhr = new XMLHttpRequest(); }
  else
    { xhr = new ActiveXObject("Microsoft.XMLHTTP"); }
  xhr.open("GET", "rails/info/properties", false);
  xhr.send("");
  info.innerHTML = xhr.responseText;
  info.style.display = 'block';
}
</script>
</head>
<body>
<div id="page">
<div id="sidebar">
<ul id="sidebar-items">
<li>
<h3>Browse the documentation</h3>
<ul class="links">
<li><a href="http://guides.rubyonrails.org/">Rails Guides</a></li>
<li><a href="http://api.rubyonrails.org/">Rails API</a></li>
<li><a href="http://www.ruby-doc.org/core/">Ruby core</a></li>
<li><a href="http://www.ruby-doc.org/stdlib/">Ruby standard library</a></li>
</ul>
</li>
</ul>
</div>
<div id="content">
<div id="header">
<h1>Welcome aboard</h1>
<h2>You’re riding Ruby on Rails!</h2>
</div>
<div id="about">
<h3><a href="rails/info/properties" onclick="about(); return false">About your application’s environment</a></h3>
<div id="about-content" style="display: none"></div>
</div>
<div id="getting-started">
<h1>Getting started</h1>
<h2>Here’s how to get rolling:</h2>
<ol>
<li>
<h2>Use <code>rails generate</code> to create your models and controllers</h2>
<p>To see all available options, run it without parameters.</p>
</li>
<li>
<h2>Set up a default route and remove <span class="filename">public/index.html</span></h2>
<p>Routes are set up in <span class="filename">config/routes.rb</span>.</p>
</li>
<li>
<h2>Create your database</h2>
<p>Run <code>rake db:create</code> to create your database. If you're not using SQLite (the default), edit <span class="filename">config/database.yml</span> with your username and password.</p>
</li>
</ol>
</div>
</div>
<div id="footer"> </div>
</div>
</body>
</html>
Class: Drag {#Drag}
===================
Enables the modification of two CSS properties of an Element based on the position of the mouse while the mouse button is down.
### Implements
[Events][], [Chain][], [Options][]
Drag Method: constructor
------------------------
### Syntax
var myDragInstance = new Drag(el[, options]);
### Arguments
1. el - (*element*) The Element to apply the transformations to.
2. options - (*object*, optional) The options object.
### Options
* grid - (*number*: defaults to false) Distance in pixels for snap-to-grid dragging.
* handle - (*element*: defaults to the element passed in) The Element to act as the handle for the draggable element.
* invert - (*boolean*: defaults to false) Whether or not to invert the values reported on start and drag.
* limit - (*object*: defaults to false) An object with an x and a y property, both an array containing the minimum and maximum limit of movement of the Element.
* modifiers - (*object*: defaults to {'x': 'left', 'y': 'top'}) An object with x and y properties used to indicate the CSS modifiers (i.e. 'left').
* snap - (*number*: defaults to 6) The distance to drag before the Element starts to respond to the drag.
* style - (*boolean*: defaults to true) Whether or not to set the modifier as a style property of the element.
* unit - (*string*: defaults to 'px') A string indicating the CSS unit to append to all number values.
* preventDefault - (*boolean*: defaults to false) Calls preventDefault on the event while dragging. See [Event:preventDefault][]
* stopPropagation - (*boolean*: defaults to false) Prevents the event from "bubbling" up in the DOM tree. See [Event:stopPropagation][]
* compensateScroll - (*boolean*: defaults to false) Compensates the drag element's position while scrolling.
### Events
* beforeStart - Executed before the Drag instance attaches the events. Receives the dragged element as an argument.
* start - Executed when the user starts to drag (on mousedown). Receives the dragged element and the event as arguments.
* snap - Executed when the user has dragged past the snap option. Receives the dragged element as an argument.
* drag - Executed on every step of the drag. Receives the dragged element and the event as arguments.
* complete - Executed when the user completes the drag. Receives the dragged element and the event as arguments.
* cancel - Executed when the user has cancelled the drag. Receives the dragged element as an argument.
### Examples
var myDrag = new Drag('myDraggable', {
snap: 0,
onSnap: function(el){
el.addClass('dragging');
},
onComplete: function(el){
el.removeClass('dragging');
}
});
//create an Adobe Reader-style drag-to-scroll container
var myDragScroller = new Drag('myContainer', {
style: false,
invert: true,
modifiers: {x: 'scrollLeft', y: 'scrollTop'}
});
// corresponding HTML and CSS
<div id="myContainer" style="overflow: auto; width: 300px; height: 300px;">
<!-- lots of text -->
</div>
### Notes
- Drag requires the page to be in [Standards Mode](http://hsivonen.iki.fi/doctype/).
### See Also
- [MDC: CSS Units][]
Drag Method: attach {#Drag:attach}
----------------------------------
Attaches the mouse listener to the handle, causing the Element to be draggable.
### Syntax
myDrag.attach();
### Returns
* (*object*) This Drag instance.
### Examples
var myDrag = new Drag('myElement').detach(); //The Element can't be dragged.
$('myActivator').addEvent('click', function(){
alert('Ok, now you can drag.');
myDrag.attach();
});
### See Also
- [document.id][], [Element:makeDraggable][], [Drag:detach](#Drag:detach), [Element:addEvent][]
Drag Method: detach {#Drag:detach}
----------------------------------
Detaches the mouse listener from the handle, preventing the Element from being dragged.
### Syntax
myDrag.detach();
### Returns
* (*object*) This Drag instance.
### Examples
var myDrag = new Drag('myElement');
$('myDeactivator').addEvent('click', function(){
alert('No more dragging for you, Mister.');
myDrag.detach();
});
### See Also
- [document.id][], [Element:makeDraggable][], [Element:addEvent][]
Drag Method: stop {#Drag:stop}
------------------------------
Stops (removes) all attached events from the Drag instance. If the event is passed, it executes the 'complete' Event.
### Syntax
myDrag.stop([event]);
### Arguments
1. event - (*event*) the Event that is fired (typically by mouseup). It is passed along to the 'complete' event in addition to the dragged element. If you pass any truthy value (i.e. not *false*, *zero*, etc.), the 'complete' event fires and receives that value.
### Examples
var myDrag = new Drag('myElement', {
onSnap: function(){
this.moved = this.moved || 0;
this.moved++;
if (this.moved > 100){
this.stop();
alert("Stop! You'll make the Element angry.");
}
}
});
Type: Element {#Element}
==========================
Custom Type to allow all of its methods to be used with any DOM element via the document.id function [document.id][].
Element Method: makeResizable {#Element:makeResizable}
------------------------------------------------------
Adds drag-to-resize behavior to an Element using supplied options.
### Syntax
var myResize = myElement.makeResizable([options]);
### Arguments
1. options - (*object*, optional) See [Drag](#Drag) for acceptable options.
### Returns
* (*object*) The Drag instance that was created.
### Examples
var myResize = $('myElement').makeResizable({
onComplete: function(){
alert('Done resizing.');
}
});
### See Also
- [Drag](#Drag)
[document.id]: /core/Element/Element#Window:document-id
[Element:addEvent]: /core/Element/Element.Event/#Element:addEvent
[Element:makeDraggable]: /more/Drag/Drag.Move/#Element:makeDraggable
[Events]: /core/Class/Class.Extras#Events
[Event:preventDefault]: /core/Types/Event#Event:preventDefault
[Event:stopPropagation]: /core/Types/Event#Event:stopPropagation
[Chain]: /core/Class/Class.Extras#Chain
[Options]: /core/Class/Class.Extras#Options
[MDC: CSS Units]: https://developer.mozilla.org/en/CSS-2_Quick_Reference/Units
{
"private": true,
"dependencies": {
"redux": "^4.0.1"
}
}
// Copyright 2020 the u-root Authors. All rights reserved
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// SPDX-License-Identifier: BSD-3-Clause
//
package uefivars
import (
"os"
"testing"
"github.com/u-root/u-root/pkg/uefivars/vartest"
)
// TestMain is needed to extract the testdata from a zip to a temp dir, and to
// clean up the temp dir afterwards
func TestMain(m *testing.M) {
efiVarDir, cleanup, err := vartest.SetupVarZip("testdata/sys_fw_efi_vars.zip")
if err != nil {
panic(err)
}
EfiVarDir = efiVarDir
rc := m.Run()
cleanup()
os.Exit(rc)
}
// mkerrors.sh -Wall -Werror -static -I/tmp/include -m64
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build amd64,linux
// Code generated by cmd/cgo -godefs; DO NOT EDIT.
// cgo -godefs -- -Wall -Werror -static -I/tmp/include -m64 _const.go
package unix
import "syscall"
const (
B1000000 = 0x1008
B115200 = 0x1002
B1152000 = 0x1009
B1500000 = 0x100a
B2000000 = 0x100b
B230400 = 0x1003
B2500000 = 0x100c
B3000000 = 0x100d
B3500000 = 0x100e
B4000000 = 0x100f
B460800 = 0x1004
B500000 = 0x1005
B57600 = 0x1001
B576000 = 0x1006
B921600 = 0x1007
BLKBSZGET = 0x80081270
BLKBSZSET = 0x40081271
BLKFLSBUF = 0x1261
BLKFRAGET = 0x1265
BLKFRASET = 0x1264
BLKGETSIZE = 0x1260
BLKGETSIZE64 = 0x80081272
BLKPBSZGET = 0x127b
BLKRAGET = 0x1263
BLKRASET = 0x1262
BLKROGET = 0x125e
BLKROSET = 0x125d
BLKRRPART = 0x125f
BLKSECTGET = 0x1267
BLKSECTSET = 0x1266
BLKSSZGET = 0x1268
BOTHER = 0x1000
BS1 = 0x2000
BSDLY = 0x2000
CBAUD = 0x100f
CBAUDEX = 0x1000
CIBAUD = 0x100f0000
CLOCAL = 0x800
CR1 = 0x200
CR2 = 0x400
CR3 = 0x600
CRDLY = 0x600
CREAD = 0x80
CS6 = 0x10
CS7 = 0x20
CS8 = 0x30
CSIZE = 0x30
CSTOPB = 0x40
ECHOCTL = 0x200
ECHOE = 0x10
ECHOK = 0x20
ECHOKE = 0x800
ECHONL = 0x40
ECHOPRT = 0x400
EFD_CLOEXEC = 0x80000
EFD_NONBLOCK = 0x800
EPOLL_CLOEXEC = 0x80000
EXTPROC = 0x10000
FF1 = 0x8000
FFDLY = 0x8000
FLUSHO = 0x1000
FP_XSTATE_MAGIC2 = 0x46505845
FS_IOC_ENABLE_VERITY = 0x40806685
FS_IOC_GETFLAGS = 0x80086601
FS_IOC_GET_ENCRYPTION_NONCE = 0x8010661b
FS_IOC_GET_ENCRYPTION_POLICY = 0x400c6615
FS_IOC_GET_ENCRYPTION_PWSALT = 0x40106614
FS_IOC_SETFLAGS = 0x40086602
FS_IOC_SET_ENCRYPTION_POLICY = 0x800c6613
F_GETLK = 0x5
F_GETLK64 = 0x5
F_GETOWN = 0x9
F_RDLCK = 0x0
F_SETLK = 0x6
F_SETLK64 = 0x6
F_SETLKW = 0x7
F_SETLKW64 = 0x7
F_SETOWN = 0x8
F_UNLCK = 0x2
F_WRLCK = 0x1
HUPCL = 0x400
ICANON = 0x2
IEXTEN = 0x8000
IN_CLOEXEC = 0x80000
IN_NONBLOCK = 0x800
IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x7b9
ISIG = 0x1
IUCLC = 0x200
IXOFF = 0x1000
IXON = 0x400
MAP_32BIT = 0x40
MAP_ANON = 0x20
MAP_ANONYMOUS = 0x20
MAP_DENYWRITE = 0x800
MAP_EXECUTABLE = 0x1000
MAP_GROWSDOWN = 0x100
MAP_HUGETLB = 0x40000
MAP_LOCKED = 0x2000
MAP_NONBLOCK = 0x10000
MAP_NORESERVE = 0x4000
MAP_POPULATE = 0x8000
MAP_STACK = 0x20000
MAP_SYNC = 0x80000
MCL_CURRENT = 0x1
MCL_FUTURE = 0x2
MCL_ONFAULT = 0x4
NFDBITS = 0x40
NLDLY = 0x100
NOFLSH = 0x80
NS_GET_NSTYPE = 0xb703
NS_GET_OWNER_UID = 0xb704
NS_GET_PARENT = 0xb702
NS_GET_USERNS = 0xb701
OLCUC = 0x2
ONLCR = 0x4
O_APPEND = 0x400
O_ASYNC = 0x2000
O_CLOEXEC = 0x80000
O_CREAT = 0x40
O_DIRECT = 0x4000
O_DIRECTORY = 0x10000
O_DSYNC = 0x1000
O_EXCL = 0x80
O_FSYNC = 0x101000
O_LARGEFILE = 0x0
O_NDELAY = 0x800
O_NOATIME = 0x40000
O_NOCTTY = 0x100
O_NOFOLLOW = 0x20000
O_NONBLOCK = 0x800
O_PATH = 0x200000
O_RSYNC = 0x101000
O_SYNC = 0x101000
O_TMPFILE = 0x410000
O_TRUNC = 0x200
PARENB = 0x100
PARODD = 0x200
PENDIN = 0x4000
PERF_EVENT_IOC_DISABLE = 0x2401
PERF_EVENT_IOC_ENABLE = 0x2400
PERF_EVENT_IOC_ID = 0x80082407
PERF_EVENT_IOC_MODIFY_ATTRIBUTES = 0x4008240b
PERF_EVENT_IOC_PAUSE_OUTPUT = 0x40042409
PERF_EVENT_IOC_PERIOD = 0x40082404
PERF_EVENT_IOC_QUERY_BPF = 0xc008240a
PERF_EVENT_IOC_REFRESH = 0x2402
PERF_EVENT_IOC_RESET = 0x2403
PERF_EVENT_IOC_SET_BPF = 0x40042408
PERF_EVENT_IOC_SET_FILTER = 0x40082406
PERF_EVENT_IOC_SET_OUTPUT = 0x2405
PPPIOCATTACH = 0x4004743d
PPPIOCATTCHAN = 0x40047438
PPPIOCCONNECT = 0x4004743a
PPPIOCDETACH = 0x4004743c
PPPIOCDISCONN = 0x7439
PPPIOCGASYNCMAP = 0x80047458
PPPIOCGCHAN = 0x80047437
PPPIOCGDEBUG = 0x80047441
PPPIOCGFLAGS = 0x8004745a
PPPIOCGIDLE = 0x8010743f
PPPIOCGIDLE32 = 0x8008743f
PPPIOCGIDLE64 = 0x8010743f
PPPIOCGL2TPSTATS = 0x80487436
PPPIOCGMRU = 0x80047453
PPPIOCGRASYNCMAP = 0x80047455
PPPIOCGUNIT = 0x80047456
PPPIOCGXASYNCMAP = 0x80207450
PPPIOCSACTIVE = 0x40107446
PPPIOCSASYNCMAP = 0x40047457
PPPIOCSCOMPRESS = 0x4010744d
PPPIOCSDEBUG = 0x40047440
PPPIOCSFLAGS = 0x40047459
PPPIOCSMAXCID = 0x40047451
PPPIOCSMRRU = 0x4004743b
PPPIOCSMRU = 0x40047452
PPPIOCSNPMODE = 0x4008744b
PPPIOCSPASS = 0x40107447
PPPIOCSRASYNCMAP = 0x40047454
PPPIOCSXASYNCMAP = 0x4020744f
PPPIOCXFERUNIT = 0x744e
PR_SET_PTRACER_ANY = 0xffffffffffffffff
PTRACE_ARCH_PRCTL = 0x1e
PTRACE_GETFPREGS = 0xe
PTRACE_GETFPXREGS = 0x12
PTRACE_GET_THREAD_AREA = 0x19
PTRACE_OLDSETOPTIONS = 0x15
PTRACE_SETFPREGS = 0xf
PTRACE_SETFPXREGS = 0x13
PTRACE_SET_THREAD_AREA = 0x1a
PTRACE_SINGLEBLOCK = 0x21
PTRACE_SYSEMU = 0x1f
PTRACE_SYSEMU_SINGLESTEP = 0x20
RLIMIT_AS = 0x9
RLIMIT_MEMLOCK = 0x8
RLIMIT_NOFILE = 0x7
RLIMIT_NPROC = 0x6
RLIMIT_RSS = 0x5
RNDADDENTROPY = 0x40085203
RNDADDTOENTCNT = 0x40045201
RNDCLEARPOOL = 0x5206
RNDGETENTCNT = 0x80045200
RNDGETPOOL = 0x80085202
RNDRESEEDCRNG = 0x5207
RNDZAPENTCNT = 0x5204
RTC_AIE_OFF = 0x7002
RTC_AIE_ON = 0x7001
RTC_ALM_READ = 0x80247008
RTC_ALM_SET = 0x40247007
RTC_EPOCH_READ = 0x8008700d
RTC_EPOCH_SET = 0x4008700e
RTC_IRQP_READ = 0x8008700b
RTC_IRQP_SET = 0x4008700c
RTC_PIE_OFF = 0x7006
RTC_PIE_ON = 0x7005
RTC_PLL_GET = 0x80207011
RTC_PLL_SET = 0x40207012
RTC_RD_TIME = 0x80247009
RTC_SET_TIME = 0x4024700a
RTC_UIE_OFF = 0x7004
RTC_UIE_ON = 0x7003
RTC_VL_CLR = 0x7014
RTC_VL_READ = 0x80047013
RTC_WIE_OFF = 0x7010
RTC_WIE_ON = 0x700f
RTC_WKALM_RD = 0x80287010
RTC_WKALM_SET = 0x4028700f
SCM_TIMESTAMPING = 0x25
SCM_TIMESTAMPING_OPT_STATS = 0x36
SCM_TIMESTAMPING_PKTINFO = 0x3a
SCM_TIMESTAMPNS = 0x23
SCM_TXTIME = 0x3d
SCM_WIFI_STATUS = 0x29
SFD_CLOEXEC = 0x80000
SFD_NONBLOCK = 0x800
SIOCATMARK = 0x8905
SIOCGPGRP = 0x8904
SIOCGSTAMPNS_NEW = 0x80108907
SIOCGSTAMP_NEW = 0x80108906
SIOCINQ = 0x541b
SIOCOUTQ = 0x5411
SIOCSPGRP = 0x8902
SOCK_CLOEXEC = 0x80000
SOCK_DGRAM = 0x2
SOCK_NONBLOCK = 0x800
SOCK_STREAM = 0x1
SOL_SOCKET = 0x1
SO_ACCEPTCONN = 0x1e
SO_ATTACH_BPF = 0x32
SO_ATTACH_REUSEPORT_CBPF = 0x33
SO_ATTACH_REUSEPORT_EBPF = 0x34
SO_BINDTODEVICE = 0x19
SO_BINDTOIFINDEX = 0x3e
SO_BPF_EXTENSIONS = 0x30
SO_BROADCAST = 0x6
SO_BSDCOMPAT = 0xe
SO_BUSY_POLL = 0x2e
SO_CNX_ADVICE = 0x35
SO_COOKIE = 0x39
SO_DETACH_REUSEPORT_BPF = 0x44
SO_DOMAIN = 0x27
SO_DONTROUTE = 0x5
SO_ERROR = 0x4
SO_INCOMING_CPU = 0x31
SO_INCOMING_NAPI_ID = 0x38
SO_KEEPALIVE = 0x9
SO_LINGER = 0xd
SO_LOCK_FILTER = 0x2c
SO_MARK = 0x24
SO_MAX_PACING_RATE = 0x2f
SO_MEMINFO = 0x37
SO_NOFCS = 0x2b
SO_OOBINLINE = 0xa
SO_PASSCRED = 0x10
SO_PASSSEC = 0x22
SO_PEEK_OFF = 0x2a
SO_PEERCRED = 0x11
SO_PEERGROUPS = 0x3b
SO_PEERSEC = 0x1f
SO_PROTOCOL = 0x26
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x12
SO_RCVTIMEO = 0x14
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x14
SO_REUSEADDR = 0x2
SO_REUSEPORT = 0xf
SO_RXQ_OVFL = 0x28
SO_SECURITY_AUTHENTICATION = 0x16
SO_SECURITY_ENCRYPTION_NETWORK = 0x18
SO_SECURITY_ENCRYPTION_TRANSPORT = 0x17
SO_SELECT_ERR_QUEUE = 0x2d
SO_SNDBUF = 0x7
SO_SNDBUFFORCE = 0x20
SO_SNDLOWAT = 0x13
SO_SNDTIMEO = 0x15
SO_SNDTIMEO_NEW = 0x43
SO_SNDTIMEO_OLD = 0x15
SO_TIMESTAMPING = 0x25
SO_TIMESTAMPING_NEW = 0x41
SO_TIMESTAMPING_OLD = 0x25
SO_TIMESTAMPNS = 0x23
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29
SO_ZEROCOPY = 0x3c
TAB1 = 0x800
TAB2 = 0x1000
TAB3 = 0x1800
TABDLY = 0x1800
TCFLSH = 0x540b
TCGETA = 0x5405
TCGETS = 0x5401
TCGETS2 = 0x802c542a
TCGETX = 0x5432
TCSAFLUSH = 0x2
TCSBRK = 0x5409
TCSBRKP = 0x5425
TCSETA = 0x5406
TCSETAF = 0x5408
TCSETAW = 0x5407
TCSETS = 0x5402
TCSETS2 = 0x402c542b
TCSETSF = 0x5404
TCSETSF2 = 0x402c542d
TCSETSW = 0x5403
TCSETSW2 = 0x402c542c
TCSETX = 0x5433
TCSETXF = 0x5434
TCSETXW = 0x5435
TCXONC = 0x540a
TFD_CLOEXEC = 0x80000
TFD_NONBLOCK = 0x800
TIOCCBRK = 0x5428
TIOCCONS = 0x541d
TIOCEXCL = 0x540c
TIOCGDEV = 0x80045432
TIOCGETD = 0x5424
TIOCGEXCL = 0x80045440
TIOCGICOUNT = 0x545d
TIOCGISO7816 = 0x80285442
TIOCGLCKTRMIOS = 0x5456
TIOCGPGRP = 0x540f
TIOCGPKT = 0x80045438
TIOCGPTLCK = 0x80045439
TIOCGPTN = 0x80045430
TIOCGPTPEER = 0x5441
TIOCGRS485 = 0x542e
TIOCGSERIAL = 0x541e
TIOCGSID = 0x5429
TIOCGSOFTCAR = 0x5419
TIOCGWINSZ = 0x5413
TIOCINQ = 0x541b
TIOCLINUX = 0x541c
TIOCMBIC = 0x5417
TIOCMBIS = 0x5416
TIOCMGET = 0x5415
TIOCMIWAIT = 0x545c
TIOCMSET = 0x5418
TIOCM_CAR = 0x40
TIOCM_CD = 0x40
TIOCM_CTS = 0x20
TIOCM_DSR = 0x100
TIOCM_RI = 0x80
TIOCM_RNG = 0x80
TIOCM_SR = 0x10
TIOCM_ST = 0x8
TIOCNOTTY = 0x5422
TIOCNXCL = 0x540d
TIOCOUTQ = 0x5411
TIOCPKT = 0x5420
TIOCSBRK = 0x5427
TIOCSCTTY = 0x540e
TIOCSERCONFIG = 0x5453
TIOCSERGETLSR = 0x5459
TIOCSERGETMULTI = 0x545a
TIOCSERGSTRUCT = 0x5458
TIOCSERGWILD = 0x5454
TIOCSERSETMULTI = 0x545b
TIOCSERSWILD = 0x5455
TIOCSER_TEMT = 0x1
TIOCSETD = 0x5423
TIOCSIG = 0x40045436
TIOCSISO7816 = 0xc0285443
TIOCSLCKTRMIOS = 0x5457
TIOCSPGRP = 0x5410
TIOCSPTLCK = 0x40045431
TIOCSRS485 = 0x542f
TIOCSSERIAL = 0x541f
TIOCSSOFTCAR = 0x541a
TIOCSTI = 0x5412
TIOCSWINSZ = 0x5414
TIOCVHANGUP = 0x5437
TOSTOP = 0x100
TUNATTACHFILTER = 0x401054d5
TUNDETACHFILTER = 0x401054d6
TUNGETDEVNETNS = 0x54e3
TUNGETFEATURES = 0x800454cf
TUNGETFILTER = 0x801054db
TUNGETIFF = 0x800454d2
TUNGETSNDBUF = 0x800454d3
TUNGETVNETBE = 0x800454df
TUNGETVNETHDRSZ = 0x800454d7
TUNGETVNETLE = 0x800454dd
TUNSETCARRIER = 0x400454e2
TUNSETDEBUG = 0x400454c9
TUNSETFILTEREBPF = 0x800454e1
TUNSETGROUP = 0x400454ce
TUNSETIFF = 0x400454ca
TUNSETIFINDEX = 0x400454da
TUNSETLINK = 0x400454cd
TUNSETNOCSUM = 0x400454c8
TUNSETOFFLOAD = 0x400454d0
TUNSETOWNER = 0x400454cc
TUNSETPERSIST = 0x400454cb
TUNSETQUEUE = 0x400454d9
TUNSETSNDBUF = 0x400454d4
TUNSETSTEERINGEBPF = 0x800454e0
TUNSETTXFILTER = 0x400454d1
TUNSETVNETBE = 0x400454de
TUNSETVNETHDRSZ = 0x400454d8
TUNSETVNETLE = 0x400454dc
UBI_IOCATT = 0x40186f40
UBI_IOCDET = 0x40046f41
UBI_IOCEBCH = 0x40044f02
UBI_IOCEBER = 0x40044f01
UBI_IOCEBISMAP = 0x80044f05
UBI_IOCEBMAP = 0x40084f03
UBI_IOCEBUNMAP = 0x40044f04
UBI_IOCMKVOL = 0x40986f00
UBI_IOCRMVOL = 0x40046f01
UBI_IOCRNVOL = 0x51106f03
UBI_IOCRPEB = 0x40046f04
UBI_IOCRSVOL = 0x400c6f02
UBI_IOCSETVOLPROP = 0x40104f06
UBI_IOCSPEB = 0x40046f05
UBI_IOCVOLCRBLK = 0x40804f07
UBI_IOCVOLRMBLK = 0x4f08
UBI_IOCVOLUP = 0x40084f00
VDISCARD = 0xd
VEOF = 0x4
VEOL = 0xb
VEOL2 = 0x10
VMIN = 0x6
VREPRINT = 0xc
VSTART = 0x8
VSTOP = 0x9
VSUSP = 0xa
VSWTC = 0x7
VT1 = 0x4000
VTDLY = 0x4000
VTIME = 0x5
VWERASE = 0xe
WDIOC_GETBOOTSTATUS = 0x80045702
WDIOC_GETPRETIMEOUT = 0x80045709
WDIOC_GETSTATUS = 0x80045701
WDIOC_GETSUPPORT = 0x80285700
WDIOC_GETTEMP = 0x80045703
WDIOC_GETTIMELEFT = 0x8004570a
WDIOC_GETTIMEOUT = 0x80045707
WDIOC_KEEPALIVE = 0x80045705
WDIOC_SETOPTIONS = 0x80045704
WORDSIZE = 0x40
XCASE = 0x4
XTABS = 0x1800
)
// Errors
const (
EADDRINUSE = syscall.Errno(0x62)
EADDRNOTAVAIL = syscall.Errno(0x63)
EADV = syscall.Errno(0x44)
EAFNOSUPPORT = syscall.Errno(0x61)
EALREADY = syscall.Errno(0x72)
EBADE = syscall.Errno(0x34)
EBADFD = syscall.Errno(0x4d)
EBADMSG = syscall.Errno(0x4a)
EBADR = syscall.Errno(0x35)
EBADRQC = syscall.Errno(0x38)
EBADSLT = syscall.Errno(0x39)
EBFONT = syscall.Errno(0x3b)
ECANCELED = syscall.Errno(0x7d)
ECHRNG = syscall.Errno(0x2c)
ECOMM = syscall.Errno(0x46)
ECONNABORTED = syscall.Errno(0x67)
ECONNREFUSED = syscall.Errno(0x6f)
ECONNRESET = syscall.Errno(0x68)
EDEADLK = syscall.Errno(0x23)
EDEADLOCK = syscall.Errno(0x23)
EDESTADDRREQ = syscall.Errno(0x59)
EDOTDOT = syscall.Errno(0x49)
EDQUOT = syscall.Errno(0x7a)
EHOSTDOWN = syscall.Errno(0x70)
EHOSTUNREACH = syscall.Errno(0x71)
EHWPOISON = syscall.Errno(0x85)
EIDRM = syscall.Errno(0x2b)
EILSEQ = syscall.Errno(0x54)
EINPROGRESS = syscall.Errno(0x73)
EISCONN = syscall.Errno(0x6a)
EISNAM = syscall.Errno(0x78)
EKEYEXPIRED = syscall.Errno(0x7f)
EKEYREJECTED = syscall.Errno(0x81)
EKEYREVOKED = syscall.Errno(0x80)
EL2HLT = syscall.Errno(0x33)
EL2NSYNC = syscall.Errno(0x2d)
EL3HLT = syscall.Errno(0x2e)
EL3RST = syscall.Errno(0x2f)
ELIBACC = syscall.Errno(0x4f)
ELIBBAD = syscall.Errno(0x50)
ELIBEXEC = syscall.Errno(0x53)
ELIBMAX = syscall.Errno(0x52)
ELIBSCN = syscall.Errno(0x51)
ELNRNG = syscall.Errno(0x30)
ELOOP = syscall.Errno(0x28)
EMEDIUMTYPE = syscall.Errno(0x7c)
EMSGSIZE = syscall.Errno(0x5a)
EMULTIHOP = syscall.Errno(0x48)
ENAMETOOLONG = syscall.Errno(0x24)
ENAVAIL = syscall.Errno(0x77)
ENETDOWN = syscall.Errno(0x64)
ENETRESET = syscall.Errno(0x66)
ENETUNREACH = syscall.Errno(0x65)
ENOANO = syscall.Errno(0x37)
ENOBUFS = syscall.Errno(0x69)
ENOCSI = syscall.Errno(0x32)
ENODATA = syscall.Errno(0x3d)
ENOKEY = syscall.Errno(0x7e)
ENOLCK = syscall.Errno(0x25)
ENOLINK = syscall.Errno(0x43)
ENOMEDIUM = syscall.Errno(0x7b)
ENOMSG = syscall.Errno(0x2a)
ENONET = syscall.Errno(0x40)
ENOPKG = syscall.Errno(0x41)
ENOPROTOOPT = syscall.Errno(0x5c)
ENOSR = syscall.Errno(0x3f)
ENOSTR = syscall.Errno(0x3c)
ENOSYS = syscall.Errno(0x26)
ENOTCONN = syscall.Errno(0x6b)
ENOTEMPTY = syscall.Errno(0x27)
ENOTNAM = syscall.Errno(0x76)
ENOTRECOVERABLE = syscall.Errno(0x83)
ENOTSOCK = syscall.Errno(0x58)
ENOTSUP = syscall.Errno(0x5f)
ENOTUNIQ = syscall.Errno(0x4c)
EOPNOTSUPP = syscall.Errno(0x5f)
EOVERFLOW = syscall.Errno(0x4b)
EOWNERDEAD = syscall.Errno(0x82)
EPFNOSUPPORT = syscall.Errno(0x60)
EPROTO = syscall.Errno(0x47)
EPROTONOSUPPORT = syscall.Errno(0x5d)
EPROTOTYPE = syscall.Errno(0x5b)
EREMCHG = syscall.Errno(0x4e)
EREMOTE = syscall.Errno(0x42)
EREMOTEIO = syscall.Errno(0x79)
ERESTART = syscall.Errno(0x55)
ERFKILL = syscall.Errno(0x84)
ESHUTDOWN = syscall.Errno(0x6c)
ESOCKTNOSUPPORT = syscall.Errno(0x5e)
ESRMNT = syscall.Errno(0x45)
ESTALE = syscall.Errno(0x74)
ESTRPIPE = syscall.Errno(0x56)
ETIME = syscall.Errno(0x3e)
ETIMEDOUT = syscall.Errno(0x6e)
ETOOMANYREFS = syscall.Errno(0x6d)
EUCLEAN = syscall.Errno(0x75)
EUNATCH = syscall.Errno(0x31)
EUSERS = syscall.Errno(0x57)
EXFULL = syscall.Errno(0x36)
)
// Signals
const (
SIGBUS = syscall.Signal(0x7)
SIGCHLD = syscall.Signal(0x11)
SIGCLD = syscall.Signal(0x11)
SIGCONT = syscall.Signal(0x12)
SIGIO = syscall.Signal(0x1d)
SIGPOLL = syscall.Signal(0x1d)
SIGPROF = syscall.Signal(0x1b)
SIGPWR = syscall.Signal(0x1e)
SIGSTKFLT = syscall.Signal(0x10)
SIGSTOP = syscall.Signal(0x13)
SIGSYS = syscall.Signal(0x1f)
SIGTSTP = syscall.Signal(0x14)
SIGTTIN = syscall.Signal(0x15)
SIGTTOU = syscall.Signal(0x16)
SIGURG = syscall.Signal(0x17)
SIGUSR1 = syscall.Signal(0xa)
SIGUSR2 = syscall.Signal(0xc)
SIGVTALRM = syscall.Signal(0x1a)
SIGWINCH = syscall.Signal(0x1c)
SIGXCPU = syscall.Signal(0x18)
SIGXFSZ = syscall.Signal(0x19)
)
// Error table
var errorList = [...]struct {
num syscall.Errno
name string
desc string
}{
{1, "EPERM", "operation not permitted"},
{2, "ENOENT", "no such file or directory"},
{3, "ESRCH", "no such process"},
{4, "EINTR", "interrupted system call"},
{5, "EIO", "input/output error"},
{6, "ENXIO", "no such device or address"},
{7, "E2BIG", "argument list too long"},
{8, "ENOEXEC", "exec format error"},
{9, "EBADF", "bad file descriptor"},
{10, "ECHILD", "no child processes"},
{11, "EAGAIN", "resource temporarily unavailable"},
{12, "ENOMEM", "cannot allocate memory"},
{13, "EACCES", "permission denied"},
{14, "EFAULT", "bad address"},
{15, "ENOTBLK", "block device required"},
{16, "EBUSY", "device or resource busy"},
{17, "EEXIST", "file exists"},
{18, "EXDEV", "invalid cross-device link"},
{19, "ENODEV", "no such device"},
{20, "ENOTDIR", "not a directory"},
{21, "EISDIR", "is a directory"},
{22, "EINVAL", "invalid argument"},
{23, "ENFILE", "too many open files in system"},
{24, "EMFILE", "too many open files"},
{25, "ENOTTY", "inappropriate ioctl for device"},
{26, "ETXTBSY", "text file busy"},
{27, "EFBIG", "file too large"},
{28, "ENOSPC", "no space left on device"},
{29, "ESPIPE", "illegal seek"},
{30, "EROFS", "read-only file system"},
{31, "EMLINK", "too many links"},
{32, "EPIPE", "broken pipe"},
{33, "EDOM", "numerical argument out of domain"},
{34, "ERANGE", "numerical result out of range"},
{35, "EDEADLK", "resource deadlock avoided"},
{36, "ENAMETOOLONG", "file name too long"},
{37, "ENOLCK", "no locks available"},
{38, "ENOSYS", "function not implemented"},
{39, "ENOTEMPTY", "directory not empty"},
{40, "ELOOP", "too many levels of symbolic links"},
{42, "ENOMSG", "no message of desired type"},
{43, "EIDRM", "identifier removed"},
{44, "ECHRNG", "channel number out of range"},
{45, "EL2NSYNC", "level 2 not synchronized"},
{46, "EL3HLT", "level 3 halted"},
{47, "EL3RST", "level 3 reset"},
{48, "ELNRNG", "link number out of range"},
{49, "EUNATCH", "protocol driver not attached"},
{50, "ENOCSI", "no CSI structure available"},
{51, "EL2HLT", "level 2 halted"},
{52, "EBADE", "invalid exchange"},
{53, "EBADR", "invalid request descriptor"},
{54, "EXFULL", "exchange full"},
{55, "ENOANO", "no anode"},
{56, "EBADRQC", "invalid request code"},
{57, "EBADSLT", "invalid slot"},
{59, "EBFONT", "bad font file format"},
{60, "ENOSTR", "device not a stream"},
{61, "ENODATA", "no data available"},
{62, "ETIME", "timer expired"},
{63, "ENOSR", "out of streams resources"},
{64, "ENONET", "machine is not on the network"},
{65, "ENOPKG", "package not installed"},
{66, "EREMOTE", "object is remote"},
{67, "ENOLINK", "link has been severed"},
{68, "EADV", "advertise error"},
{69, "ESRMNT", "srmount error"},
{70, "ECOMM", "communication error on send"},
{71, "EPROTO", "protocol error"},
{72, "EMULTIHOP", "multihop attempted"},
{73, "EDOTDOT", "RFS specific error"},
{74, "EBADMSG", "bad message"},
{75, "EOVERFLOW", "value too large for defined data type"},
{76, "ENOTUNIQ", "name not unique on network"},
{77, "EBADFD", "file descriptor in bad state"},
{78, "EREMCHG", "remote address changed"},
{79, "ELIBACC", "can not access a needed shared library"},
{80, "ELIBBAD", "accessing a corrupted shared library"},
{81, "ELIBSCN", ".lib section in a.out corrupted"},
{82, "ELIBMAX", "attempting to link in too many shared libraries"},
{83, "ELIBEXEC", "cannot exec a shared library directly"},
{84, "EILSEQ", "invalid or incomplete multibyte or wide character"},
{85, "ERESTART", "interrupted system call should be restarted"},
{86, "ESTRPIPE", "streams pipe error"},
{87, "EUSERS", "too many users"},
{88, "ENOTSOCK", "socket operation on non-socket"},
{89, "EDESTADDRREQ", "destination address required"},
{90, "EMSGSIZE", "message too long"},
{91, "EPROTOTYPE", "protocol wrong type for socket"},
{92, "ENOPROTOOPT", "protocol not available"},
{93, "EPROTONOSUPPORT", "protocol not supported"},
{94, "ESOCKTNOSUPPORT", "socket type not supported"},
{95, "ENOTSUP", "operation not supported"},
{96, "EPFNOSUPPORT", "protocol family not supported"},
{97, "EAFNOSUPPORT", "address family not supported by protocol"},
{98, "EADDRINUSE", "address already in use"},
{99, "EADDRNOTAVAIL", "cannot assign requested address"},
{100, "ENETDOWN", "network is down"},
{101, "ENETUNREACH", "network is unreachable"},
{102, "ENETRESET", "network dropped connection on reset"},
{103, "ECONNABORTED", "software caused connection abort"},
{104, "ECONNRESET", "connection reset by peer"},
{105, "ENOBUFS", "no buffer space available"},
{106, "EISCONN", "transport endpoint is already connected"},
{107, "ENOTCONN", "transport endpoint is not connected"},
{108, "ESHUTDOWN", "cannot send after transport endpoint shutdown"},
{109, "ETOOMANYREFS", "too many references: cannot splice"},
{110, "ETIMEDOUT", "connection timed out"},
{111, "ECONNREFUSED", "connection refused"},
{112, "EHOSTDOWN", "host is down"},
{113, "EHOSTUNREACH", "no route to host"},
{114, "EALREADY", "operation already in progress"},
{115, "EINPROGRESS", "operation now in progress"},
{116, "ESTALE", "stale file handle"},
{117, "EUCLEAN", "structure needs cleaning"},
{118, "ENOTNAM", "not a XENIX named type file"},
{119, "ENAVAIL", "no XENIX semaphores available"},
{120, "EISNAM", "is a named type file"},
{121, "EREMOTEIO", "remote I/O error"},
{122, "EDQUOT", "disk quota exceeded"},
{123, "ENOMEDIUM", "no medium found"},
{124, "EMEDIUMTYPE", "wrong medium type"},
{125, "ECANCELED", "operation canceled"},
{126, "ENOKEY", "required key not available"},
{127, "EKEYEXPIRED", "key has expired"},
{128, "EKEYREVOKED", "key has been revoked"},
{129, "EKEYREJECTED", "key was rejected by service"},
{130, "EOWNERDEAD", "owner died"},
{131, "ENOTRECOVERABLE", "state not recoverable"},
{132, "ERFKILL", "operation not possible due to RF-kill"},
{133, "EHWPOISON", "memory page has hardware error"},
}
// Signal table
var signalList = [...]struct {
num syscall.Signal
name string
desc string
}{
{1, "SIGHUP", "hangup"},
{2, "SIGINT", "interrupt"},
{3, "SIGQUIT", "quit"},
{4, "SIGILL", "illegal instruction"},
{5, "SIGTRAP", "trace/breakpoint trap"},
{6, "SIGABRT", "aborted"},
{7, "SIGBUS", "bus error"},
{8, "SIGFPE", "floating point exception"},
{9, "SIGKILL", "killed"},
{10, "SIGUSR1", "user defined signal 1"},
{11, "SIGSEGV", "segmentation fault"},
{12, "SIGUSR2", "user defined signal 2"},
{13, "SIGPIPE", "broken pipe"},
{14, "SIGALRM", "alarm clock"},
{15, "SIGTERM", "terminated"},
{16, "SIGSTKFLT", "stack fault"},
{17, "SIGCHLD", "child exited"},
{18, "SIGCONT", "continued"},
{19, "SIGSTOP", "stopped (signal)"},
{20, "SIGTSTP", "stopped"},
{21, "SIGTTIN", "stopped (tty input)"},
{22, "SIGTTOU", "stopped (tty output)"},
{23, "SIGURG", "urgent I/O condition"},
{24, "SIGXCPU", "CPU time limit exceeded"},
{25, "SIGXFSZ", "file size limit exceeded"},
{26, "SIGVTALRM", "virtual timer expired"},
{27, "SIGPROF", "profiling timer expired"},
{28, "SIGWINCH", "window changed"},
{29, "SIGIO", "I/O possible"},
{30, "SIGPWR", "power failure"},
{31, "SIGSYS", "bad system call"},
}
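The errno and signal tables above are Linux-specific; Python's standard library exposes the same number-to-name mappings, which makes for a quick cross-check. A minimal sketch (the exact description strings from `os.strerror`/`signal.strsignal` can vary by platform, so they are not relied on here):

```python
import errno
import os
import signal

# Map an errno constant back to its symbolic name, as the Go table does.
name = errno.errorcode[errno.ETIMEDOUT]   # 'ETIMEDOUT'
desc = os.strerror(errno.ETIMEDOUT)       # platform-dependent message text

# Same idea for signals: constant -> enum member -> name.
sig = signal.Signals(signal.SIGINT)
print(name, sig.name)
```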
# Mantid Repository : https://github.com/mantidproject/mantid
#
# Copyright © 2018 ISIS Rutherford Appleton Laboratory UKRI,
# NScD Oak Ridge National Laboratory, European Spallation Source,
# Institut Laue - Langevin & CSNS, Institute of High Energy Physics, CAS
# SPDX - License - Identifier: GPL - 3.0 +
import os
import sys
from tempfile import TemporaryDirectory
from mantid.simpleapi import *
from mantid import api, config
from Direct.ReductionWrapper import *
import MariReduction as mr
#
import unittest
import imp
class test_helper(ReductionWrapper):
def __init__(self, web_var=None):
""" sets properties defaults for the instrument with Name"""
ReductionWrapper.__init__(self, 'MAR', web_var)
def set_custom_output_filename(self):
"""Define a custom name for the output files if the standard one is not satisfactory.
In addition, this shows an example of accessing reduction properties
and changing them if necessary.
"""
def custom_name(prop_man):
""" sample function which builds filename from
incident energy and run number and adds some auxiliary information
to it.
"""
# Note -- properties have the same names as the list of advanced and
# main properties
ei = PropertyManager.incident_energy.get_current()
# sample_run is more than just a list of runs, so we use
# the formalism below to access its methods
run_num = prop_man.sample_run
name = "SOMETHING{0}_{1:<3.2f}meV_rings".format(run_num, ei)
return name
# Uncomment this to use custom filename function
# Note: the properties are stored in prop_man class accessed as
# below.
return lambda: custom_name(self.reducer.prop_man)
# use this method to use standard file name generating function
#return None
@iliad
def reduce(self, input_file=None, output_directory=None):
self.reducer._clear_old_results()
if input_file:
self.reducer.prop_man.sample_run = input_file
run = self.reducer.prop_man.sample_run
result = []
if PropertyManager.incident_energy.multirep_mode():
en_range = self.reducer.prop_man.incident_energy
for ind, en in enumerate(en_range):
ws = CreateSampleWorkspace()
AddSampleLog(ws, LogName='run_number', LogText=str(run))
PropertyManager.sample_run.set_action_suffix('#{0}_reduced'.format(ind + 1))
PropertyManager.sample_run.synchronize_ws(ws)
result.append(ws)
self.reducer._old_runs_list.append(ws.name())
else:
ws = CreateSampleWorkspace()
AddSampleLog(ws, LogName='run_number', LogText=str(run))
PropertyManager.sample_run.set_action_suffix('_reduced')
PropertyManager.sample_run.synchronize_ws(ws)
result.append(ws)
if len(result) == 1:
result = result[0]
return result
#-----------------------------------------------------------------------------------------------------------------------------------------
#-----------------------------------------------------------------------------------------------------------------------------------------
#-----------------------------------------------------------------------------------------------------------------------------------------
#-----------------------------------------------------------------------------------------------------------------------------------------
class ReductionWrapperTest(unittest.TestCase):
def __init__(self, methodName):
return super(ReductionWrapperTest, self).__init__(methodName)
def setUp(self):
pass
def tearDown(self):
pass
def test_default_fails(self):
red = ReductionWrapper('MAR')
self.assertRaises(NotImplementedError, red.def_main_properties)
self.assertRaises(NotImplementedError, red.def_advanced_properties)
self.assertTrue('reduce' in dir(red))
def test_export_advanced_values(self):
red = mr.ReduceMARI()
main_prop = red.def_main_properties()
adv_prop = red.def_advanced_properties()
# see what has changed as main and what as advanced properties.
all_changed_prop = red.reducer.prop_man.getChangedProperties()
self.assertEqual(set(list(main_prop.keys()) + list(adv_prop.keys())), all_changed_prop)
with TemporaryDirectory() as reduce_vars_dir:
reduce_vars_file = os.path.join(reduce_vars_dir, 'reduce_vars.py')
# save web variables
red.save_web_variables(reduce_vars_file)
self.assertTrue(os.path.isfile(reduce_vars_file))
# restore saved parameters.
sys.path.insert(0, reduce_vars_dir)
import reduce_vars as rv
self.assertDictEqual(rv.standard_vars, main_prop)
self.assertDictEqual(rv.advanced_vars, adv_prop)
self.assertTrue(hasattr(rv, 'variable_help'))
imp.reload(mr)
# this would run the MARI reduction, which probably would not work from unit tests;
# will move this to system tests
#rez = mr.main()
self.assertTrue(mr.web_var)
self.assertEqual(mr.web_var.standard_vars, main_prop)
self.assertEqual(mr.web_var.advanced_vars, adv_prop)
def test_validate_settings(self):
dsp = config.getDataSearchDirs()
# clear all data search directories so no files can be found
config.setDataSearchDirs('')
red = mr.ReduceMARI()
ok, level, errors = red.validate_settings()
self.assertFalse(ok)
self.assertEqual(level, 2)
self.assertEqual(len(errors), 7)
# this run should be in data search directory for basic Mantid
red.reducer.wb_run = 11001
red.reducer.det_cal_file = '11001'
red.reducer.monovan_run = None
red.reducer.hard_mask_file = None
red.reducer.map_file = None
red.reducer.save_format = 'nxspe'
path = []
for item in dsp:
path.append(item)
config.setDataSearchDirs(path)
# hack -- let's pretend we are running from webservices,
# but the web vars are empty (so as not to overwrite the values above)
red._run_from_web = True
red._wvs.standard_vars = {}
red._wvs.advanced_vars = {}
ok, level, errors = red.validate_settings()
if not ok:
print("Errors found at level", level)
print(errors)
self.assertTrue(ok)
self.assertEqual(level, 0)
self.assertEqual(len(errors), 0)
# this is how we set it up from web
red._wvs.advanced_vars = {'save_format': ''}
ok, level, errors = red.validate_settings()
self.assertFalse(ok)
self.assertEqual(level, 1)
self.assertEqual(len(errors), 1)
#
def test_set_from_constructor(self):
red = mr.ReduceMARI()
main_prop = red.def_main_properties()
adv_prop = red.def_advanced_properties()
adv_prop['map_file'] = 'some_map'
adv_prop['data_file_ext'] = '.nxs'
main_prop['sample_run'] = 10000
class ww(object):
def __init__(self):
self.standard_vars = None
self.advanced_vars = None
web_var = ww
web_var.standard_vars = main_prop
web_var.advanced_vars = adv_prop
red1 = mr.ReduceMARI(web_var)
self.assertTrue(red1._run_from_web)
self.assertEqual(red1.reducer.prop_man.map_file, 'some_map.map')
self.assertEqual(red1.reducer.prop_man.data_file_ext, '.nxs')
self.assertEqual(red1.reducer.prop_man.sample_run, 10000)
web_var.advanced_vars = None
web_var.standard_vars['sample_run'] = 2000
red2 = mr.ReduceMARI(web_var)
self.assertTrue(red2._run_from_web)
self.assertEqual(red2.reducer.prop_man.sample_run, 2000)
#
def test_custom_print_name(self):
th = test_helper()
th.reducer.prop_man.sample_run = 100
th.reducer.prop_man.incident_energy = [10.01, 20]
th.reduce()
save_file = th.reducer.prop_man.save_file_name
# such a strange name because the custom print function above accesses the workspace
# generated by the reduction
self.assertEqual(save_file, 'SOMETHINGSR_MAR000100#2_reduced_10.01meV_rings')
PropertyManager.incident_energy.next()
save_file = th.reducer.prop_man.save_file_name
# now the reduction has not been run, and the name is generated from the run number
self.assertEqual(save_file, 'SOMETHINGSR_MAR000100#2_reduced_20.00meV_rings')
def test_return_run_list(self):
th = test_helper()
th.reducer.prop_man.sample_run = 200
th.run_reduction()
# standard reduction would save and delete workspace but our simplified one
# will just keep it
name = 'SR_MAR000200_reduced'
self.assertTrue(name in mtd)
th.reducer.prop_man.sample_run = 300
# new run deletes the old one
self.assertFalse(name in mtd)
rez = th.run_reduction()
self.assertTrue(isinstance(rez, api.Workspace))
self.assertTrue('rez' in mtd)
self.assertEqual(rez.name(), 'rez')
th.reducer.prop_man.sample_run = [300, 400]
th.run_reduction()
self.assertFalse('SR_MAR000300_reduced' in mtd)
self.assertTrue('SR_MAR000400_reduced' in mtd)
th.reducer.prop_man.sample_run = [500, 600]
self.assertFalse('SR_MAR000400_reduced' in mtd)
th.run_reduction()
self.assertFalse('SR_MAR000500_reduced' in mtd)
self.assertTrue('SR_MAR000600_reduced' in mtd)
th.reducer.prop_man.sample_run = [300, 400]
runs = th.run_reduction()
self.assertTrue('runs#1of2' in mtd)
self.assertTrue('runs#2of2' in mtd)
self.assertEqual(runs[0].name(), 'runs#1of2')
self.assertEqual(runs[1].name(), 'runs#2of2')
th.reducer.prop_man.incident_energy = [10, 20]
th.reducer.prop_man.sample_run = 300
th.run_reduction()
self.assertTrue('SR_MAR000300#1_reduced' in mtd)
self.assertTrue('SR_MAR000300#2_reduced' in mtd)
th.reducer.prop_man.sample_run = 400
th.run_reduction()
self.assertFalse('SR_MAR000300#1_reduced' in mtd)
self.assertFalse('SR_MAR000300#2_reduced' in mtd)
self.assertTrue('SR_MAR000400#1_reduced' in mtd)
self.assertTrue('SR_MAR000400#2_reduced' in mtd)
if __name__ == "__main__":
unittest.main()
# -*- coding: utf-8 -*-
#
# Fuel documentation build configuration file, created by
# sphinx-quickstart2 on Wed Oct 8 17:59:44 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
from mock import Mock as MagicMock
from sphinx.ext.autodoc import cut_lines
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
# on_rtd is whether we are on readthedocs.org
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if not on_rtd: # only import and set the theme if we're building docs locally
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.napoleon',
'sphinx.ext.todo',
'sphinx.ext.mathjax',
'sphinx.ext.graphviz',
'sphinx.ext.intersphinx',
'matplotlib.sphinxext.plot_directive',
]
intersphinx_mapping = {
'theano': ('http://theano.readthedocs.org/en/latest/', None),
'numpy': ('http://docs.scipy.org/doc/numpy/', None),
'scipy': ('http://docs.scipy.org/doc/scipy/reference/', None),
'python': ('http://docs.python.org/3.4', None),
'pandas': ('http://pandas.pydata.org/pandas-docs/stable/', None)
}
class Mock(MagicMock):
@classmethod
def __getattr__(cls, name):
return Mock()
MOCK_MODULES = ['h5py', 'zmq']
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
graphviz_dot_args = ['-Gbgcolor=#fcfcfc']  # To match the RTD theme
# Render todo lists
todo_include_todos = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Fuel'
copyright = u'2014, Université de Montréal'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
import fuel
version = '.'.join(fuel.__version__.split('.')[:2])
# The full version, including alpha/beta/rc tags.
release = fuel.__version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Fueldoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'Fuel.tex', u'Fuel Documentation',
u'Université de Montréal', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'fuel', u'Fuel Documentation',
[u'Université de Montréal'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Fuel', u'Fuel Documentation',
u'Université de Montréal', 'Fuel', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
def skip_abc(app, what, name, obj, skip, options):
return skip or name.startswith('_abc')
def setup(app):
app.connect('autodoc-process-docstring', cut_lines(2, what=['module']))
app.connect('autodoc-skip-member', skip_abc)
let x = 0
for (let i = 0; i <= 4 - 1; i++) {
x = 2 ** i
basic.showNumber(x)
}
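The MakeCode snippet above steps `x` through the first four powers of two; a plain-Python equivalent (`basic.showNumber` is micro:bit-specific, so the values are simply collected here):

```python
# Collect 2**i for i = 0..3, mirroring the loop's `i <= 4 - 1` bound.
powers = [2 ** i for i in range(4)]
print(powers)  # [1, 2, 4, 8]
```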
package com.alibaba.druid.bvt.sql.eval;
import junit.framework.TestCase;
import org.junit.Assert;
import com.alibaba.druid.sql.visitor.SQLEvalVisitorUtils;
import com.alibaba.druid.util.JdbcConstants;
public class EvalBetweenTest extends TestCase {
public void test_between() throws Exception {
Assert.assertEquals(false, SQLEvalVisitorUtils.evalExpr(JdbcConstants.MYSQL, "? between 1 and 3", 0));
Assert.assertEquals(true, SQLEvalVisitorUtils.evalExpr(JdbcConstants.MYSQL, "? between 1 and 3", 2));
Assert.assertEquals(false, SQLEvalVisitorUtils.evalExpr(JdbcConstants.MYSQL, "? between 1 and 3", 4));
}
public void test_not_between() throws Exception {
Assert.assertEquals(true, SQLEvalVisitorUtils.evalExpr(JdbcConstants.MYSQL, "? not between 1 and 3", 0));
Assert.assertEquals(false, SQLEvalVisitorUtils.evalExpr(JdbcConstants.MYSQL, "? not between 1 and 3", 2));
Assert.assertEquals(true, SQLEvalVisitorUtils.evalExpr(JdbcConstants.MYSQL, "? not between 1 and 3", 4));
}
}
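The Druid test above pins down the inclusive-bounds semantics of SQL's `BETWEEN`; a minimal Python model of the same predicate (the function name here is illustrative, not part of any Druid API):

```python
def between(x, lo, hi):
    """SQL BETWEEN: true when lo <= x <= hi, inclusive on both bounds."""
    return lo <= x <= hi

# Mirrors the Java assertions: 0 and 4 fall outside [1, 3], 2 falls inside.
assert between(0, 1, 3) is False
assert between(2, 1, 3) is True
assert between(4, 1, 3) is False
```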
/**
* Copyright Soramitsu Co., Ltd. All Rights Reserved.
* SPDX-License-Identifier: Apache-2.0
*/
#include "simulator/impl/simulator.hpp"
#include <vector>
#include <boost/range/adaptor/transformed.hpp>
#include <boost/range/algorithm/find.hpp>
#include "backend/protobuf/proto_block_factory.hpp"
#include "backend/protobuf/transaction.hpp"
#include "builders/protobuf/transaction.hpp"
#include "datetime/time.hpp"
#include "framework/test_logger.hpp"
#include "framework/test_subscriber.hpp"
#include "module/irohad/ametsuchi/mock_block_query.hpp"
#include "module/irohad/ametsuchi/mock_block_query_factory.hpp"
#include "module/irohad/ametsuchi/mock_temporary_factory.hpp"
#include "module/irohad/network/network_mocks.hpp"
#include "module/irohad/validation/mock_stateful_validator.hpp"
#include "module/shared_model/builders/protobuf/proposal.hpp"
#include "module/shared_model/builders/protobuf/test_block_builder.hpp"
#include "module/shared_model/builders/protobuf/test_proposal_builder.hpp"
#include "module/shared_model/cryptography/mock_abstract_crypto_model_signer.hpp"
#include "module/shared_model/validators/validators.hpp"
using namespace iroha;
using namespace iroha::validation;
using namespace iroha::ametsuchi;
using namespace iroha::simulator;
using namespace iroha::network;
using namespace framework::test_subscriber;
using ::testing::_;
using ::testing::A;
using ::testing::Invoke;
using ::testing::NiceMock;
using ::testing::Return;
using ::testing::ReturnArg;
using wBlock = std::shared_ptr<shared_model::interface::Block>;
class SimulatorTest : public ::testing::Test {
public:
using CryptoSignerType = shared_model::crypto::MockAbstractCryptoModelSigner<
shared_model::interface::Block>;
void SetUp() override {
validator = std::make_shared<MockStatefulValidator>();
factory = std::make_shared<NiceMock<MockTemporaryFactory>>();
query = std::make_shared<MockBlockQuery>();
ordering_gate = std::make_shared<MockOrderingGate>();
crypto_signer = std::make_shared<CryptoSignerType>();
block_query_factory = std::make_shared<MockBlockQueryFactory>();
EXPECT_CALL(*block_query_factory, createBlockQuery())
.WillRepeatedly(testing::Return(boost::make_optional(
std::shared_ptr<iroha::ametsuchi::BlockQuery>(query))));
block_factory = std::make_unique<shared_model::proto::ProtoBlockFactory>(
std::make_unique<shared_model::validation::MockValidator<
shared_model::interface::Block>>(),
std::make_unique<
shared_model::validation::MockValidator<iroha::protocol::Block>>());
EXPECT_CALL(*ordering_gate, onProposal())
.WillOnce(Return(ordering_events.get_observable()));
simulator = std::make_shared<Simulator>(ordering_gate,
validator,
factory,
block_query_factory,
crypto_signer,
std::move(block_factory),
getTestLogger("Simulator"));
}
consensus::Round round;
std::shared_ptr<MockStatefulValidator> validator;
std::shared_ptr<MockTemporaryFactory> factory;
std::shared_ptr<MockBlockQuery> query;
std::shared_ptr<MockBlockQueryFactory> block_query_factory;
std::shared_ptr<MockOrderingGate> ordering_gate;
std::shared_ptr<CryptoSignerType> crypto_signer;
std::unique_ptr<shared_model::interface::UnsafeBlockFactory> block_factory;
rxcpp::subjects::subject<OrderingEvent> ordering_events;
std::shared_ptr<Simulator> simulator;
};
shared_model::proto::Block makeBlock(int height) {
return TestBlockBuilder()
.transactions(std::vector<shared_model::proto::Transaction>())
.height(height)
.prevHash(shared_model::crypto::Hash(std::string(32, '0')))
.build();
}
auto makeProposal(int height) {
auto tx = shared_model::proto::TransactionBuilder()
.createdTime(iroha::time::now())
.creatorAccountId("admin@ru")
.addAssetQuantity("coin#coin", "1.0")
.quorum(1)
.build()
.signAndAddSignature(
shared_model::crypto::DefaultCryptoAlgorithmType::
generateKeypair())
.finish();
std::vector<shared_model::proto::Transaction> txs = {tx, tx};
auto proposal = shared_model::proto::ProposalBuilder()
.height(height)
.createdTime(iroha::time::now())
.transactions(txs)
.build();
return std::shared_ptr<const shared_model::interface::Proposal>(
std::make_shared<const shared_model::proto::Proposal>(
std::move(proposal)));
}
auto makeTx() {
return shared_model::proto::TransactionBuilder()
.createdTime(iroha::time::now())
.creatorAccountId("admin@ru")
.addAssetQuantity("coin#coin", "1.0")
.quorum(1)
.build()
.signAndAddSignature(
shared_model::crypto::DefaultCryptoAlgorithmType::generateKeypair())
.finish();
}
TEST_F(SimulatorTest, ValidWhenPreviousBlock) {
// proposal with height 2 => height 1 block present => new block generated
std::vector<shared_model::proto::Transaction> txs = {makeTx(), makeTx()};
auto validation_result =
std::make_unique<iroha::validation::VerifiedProposalAndErrors>();
validation_result->verified_proposal =
std::make_unique<shared_model::proto::Proposal>(
shared_model::proto::ProposalBuilder()
.height(2)
.createdTime(iroha::time::now())
.transactions(txs)
.build());
const auto &proposal = validation_result->verified_proposal;
shared_model::proto::Block block = makeBlock(proposal->height() - 1);
EXPECT_CALL(*factory, createTemporaryWsv()).Times(1);
EXPECT_CALL(*query, getTopBlock())
.WillOnce(Return(expected::makeValue(wBlock(clone(block)))));
EXPECT_CALL(*query, getTopBlockHeight()).WillOnce(Return(block.height()));
EXPECT_CALL(*validator, validate(_, _))
.WillOnce(Invoke([&validation_result](const auto &p, auto &v) {
return std::move(validation_result);
}));
EXPECT_CALL(*crypto_signer, sign(A<shared_model::interface::Block &>()))
.Times(1);
auto proposal_wrapper =
make_test_subscriber<CallExact>(simulator->onVerifiedProposal(), 1);
proposal_wrapper.subscribe([&](auto event) {
auto verification_result = getVerifiedProposalUnsafe(event);
auto verified_proposal = verification_result->verified_proposal;
EXPECT_EQ(verified_proposal->height(), proposal->height());
EXPECT_EQ(verified_proposal->transactions(), proposal->transactions());
EXPECT_TRUE(verification_result->rejected_transactions.empty());
});
auto block_wrapper = make_test_subscriber<CallExact>(simulator->onBlock(), 1);
block_wrapper.subscribe([&](auto event) {
auto block = getBlockUnsafe(event);
EXPECT_EQ(block->height(), proposal->height());
EXPECT_EQ(block->transactions(), proposal->transactions());
});
ordering_events.get_subscriber().on_next(
OrderingEvent{proposal, consensus::Round{}});
EXPECT_TRUE(proposal_wrapper.validate());
EXPECT_TRUE(block_wrapper.validate());
}
TEST_F(SimulatorTest, FailWhenNoBlock) {
// height 2 proposal => height 1 block not present => no validated proposal
auto proposal = makeProposal(2);
EXPECT_CALL(*factory, createTemporaryWsv()).Times(0);
EXPECT_CALL(*query, getTopBlock())
.WillOnce(Return(expected::makeError("no block")));
EXPECT_CALL(*validator, validate(_, _)).Times(0);
EXPECT_CALL(*crypto_signer, sign(A<shared_model::interface::Block &>()))
.Times(0);
auto proposal_wrapper =
make_test_subscriber<CallExact>(simulator->onVerifiedProposal(), 0);
proposal_wrapper.subscribe();
auto block_wrapper = make_test_subscriber<CallExact>(simulator->onBlock(), 0);
block_wrapper.subscribe();
ordering_events.get_subscriber().on_next(
OrderingEvent{proposal, consensus::Round{}});
ASSERT_TRUE(proposal_wrapper.validate());
ASSERT_TRUE(block_wrapper.validate());
}
TEST_F(SimulatorTest, FailWhenSameAsProposalHeight) {
// proposal with height 2 => height 2 block present => no validated proposal
auto proposal = makeProposal(2);
auto block = makeBlock(proposal->height());
EXPECT_CALL(*factory, createTemporaryWsv()).Times(0);
EXPECT_CALL(*query, getTopBlock())
.WillOnce(Return(expected::makeValue(wBlock(clone(block)))));
EXPECT_CALL(*validator, validate(_, _)).Times(0);
EXPECT_CALL(*crypto_signer, sign(A<shared_model::interface::Block &>()))
.Times(0);
auto proposal_wrapper =
make_test_subscriber<CallExact>(simulator->onVerifiedProposal(), 0);
proposal_wrapper.subscribe();
auto block_wrapper = make_test_subscriber<CallExact>(simulator->onBlock(), 0);
block_wrapper.subscribe();
ordering_events.get_subscriber().on_next(
OrderingEvent{proposal, consensus::Round{}});
ASSERT_TRUE(proposal_wrapper.validate());
ASSERT_TRUE(block_wrapper.validate());
}
/**
* Checks that, after failing a certain number of transactions in a proposal,
* the returned verified proposal will contain only the valid transactions
*
* @given proposal consisting of several transactions
* @when failing some of the transactions in that proposal
* @then verified proposal consists of txs we did not fail, and the failed
* transactions are provided as well
*/
TEST_F(SimulatorTest, SomeFailingTxs) {
// create a 3-height proposal, but validator returns only a 2-height
// verified proposal
const int kNumTransactions = 3;
std::vector<shared_model::proto::Transaction> txs;
for (int i = 0; i < kNumTransactions; ++i) {
txs.push_back(makeTx());
}
auto proposal = std::make_shared<shared_model::proto::Proposal>(
shared_model::proto::ProposalBuilder()
.height(3)
.createdTime(iroha::time::now())
.transactions(txs)
.build());
auto verified_proposal_and_errors =
std::make_unique<VerifiedProposalAndErrors>();
const shared_model::interface::types::HeightType verified_proposal_height = 2;
const std::vector<shared_model::proto::Transaction>
verified_proposal_transactions{txs[0]};
verified_proposal_and_errors->verified_proposal =
std::make_unique<shared_model::proto::Proposal>(
shared_model::proto::ProposalBuilder()
.height(verified_proposal_height)
.createdTime(iroha::time::now())
.transactions(verified_proposal_transactions)
.build());
for (auto rejected_tx = txs.begin() + 1; rejected_tx != txs.end();
++rejected_tx) {
verified_proposal_and_errors->rejected_transactions.emplace_back(
validation::TransactionError{
rejected_tx->hash(),
validation::CommandError{"SomeCommand", 1, "", true}});
}
shared_model::proto::Block block = makeBlock(proposal->height() - 1);
EXPECT_CALL(*factory, createTemporaryWsv()).Times(1);
EXPECT_CALL(*query, getTopBlock())
.WillOnce(Return(expected::makeValue(wBlock(clone(block)))));
EXPECT_CALL(*validator, validate(_, _))
.WillOnce(Invoke([&verified_proposal_and_errors](const auto &p, auto &v) {
return std::move(verified_proposal_and_errors);
}));
auto verification_result = simulator->processProposal(*proposal);
ASSERT_TRUE(verification_result);
auto verified_proposal = verification_result->get()->verified_proposal;
// ensure that txs in verified proposal do not include failed ones
EXPECT_EQ(verified_proposal->height(), verified_proposal_height);
EXPECT_EQ(verified_proposal->transactions(), verified_proposal_transactions);
EXPECT_TRUE(verification_result->get()->rejected_transactions.size()
== kNumTransactions - 1);
const auto verified_proposal_rejected_tx_hashes =
verification_result->get()->rejected_transactions
| boost::adaptors::transformed(
[](const auto &tx_error) { return tx_error.tx_hash; });
for (auto rejected_tx = txs.begin() + 1; rejected_tx != txs.end();
++rejected_tx) {
EXPECT_NE(boost::range::find(verified_proposal_rejected_tx_hashes,
rejected_tx->hash()),
boost::end(verified_proposal_rejected_tx_hashes))
<< rejected_tx->toString() << " missing in rejected transactions.";
}
}
August 30, 2006, 08:26:08
That day, six traffic accidents occurred in succession on the expressway from the Stone Forest scenic area in Yunnan to Kunming; 21 vehicles collided in total, leaving at least one person dead and several seriously injured. The cause of the accidents is under investigation.
// Copyright 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package org.chromium.ui.base;
import android.annotation.TargetApi;
import android.app.Activity;
import android.content.ClipData;
import android.content.ContentResolver;
import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.os.AsyncTask;
import android.os.Build;
import android.os.Environment;
import android.provider.MediaStore;
import android.text.TextUtils;
import android.util.Log;
import org.chromium.base.CalledByNative;
import org.chromium.base.ContentUriUtils;
import org.chromium.base.JNINamespace;
import org.chromium.ui.R;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
/**
* A dialog that is triggered from a file input field that allows a user to select a file based on
* a set of accepted file types. The path of the selected file is passed to the native dialog.
*/
@JNINamespace("ui")
class SelectFileDialog implements WindowAndroid.IntentCallback {
private static final String TAG = "SelectFileDialog";
private static final String IMAGE_TYPE = "image/";
private static final String VIDEO_TYPE = "video/";
private static final String AUDIO_TYPE = "audio/";
private static final String ALL_IMAGE_TYPES = IMAGE_TYPE + "*";
private static final String ALL_VIDEO_TYPES = VIDEO_TYPE + "*";
private static final String ALL_AUDIO_TYPES = AUDIO_TYPE + "*";
private static final String ANY_TYPES = "*/*";
private static final String CAPTURE_IMAGE_DIRECTORY = "browser-photos";
// Keep this variable in sync with the value defined in file_paths.xml.
private static final String IMAGE_FILE_PATH = "images";
private final long mNativeSelectFileDialog;
private List<String> mFileTypes;
private boolean mCapture;
private Uri mCameraOutputUri;
private SelectFileDialog(long nativeSelectFileDialog) {
mNativeSelectFileDialog = nativeSelectFileDialog;
}
/**
* Creates and starts an intent based on the passed fileTypes and capture value.
* @param fileTypes MIME types requested (e.g. "image/*")
* @param capture The capture value as described in http://www.w3.org/TR/html-media-capture/
* @param multiple Whether it should be possible to select multiple files.
* @param window The WindowAndroid that can show intents
*/
@TargetApi(Build.VERSION_CODES.JELLY_BEAN_MR2)
@CalledByNative
private void selectFile(
String[] fileTypes, boolean capture, boolean multiple, WindowAndroid window) {
mFileTypes = new ArrayList<String>(Arrays.asList(fileTypes));
mCapture = capture;
Intent chooser = new Intent(Intent.ACTION_CHOOSER);
Intent camera = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
Context context = window.getApplicationContext();
camera.setFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION |
Intent.FLAG_GRANT_WRITE_URI_PERMISSION);
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2) {
mCameraOutputUri = ContentUriUtils.getContentUriFromFile(
context, getFileForImageCapture(context));
} else {
mCameraOutputUri = Uri.fromFile(getFileForImageCapture(context));
}
} catch (IOException e) {
Log.e(TAG, "Cannot retrieve content uri from file", e);
}
if (mCameraOutputUri == null) {
onFileNotSelected();
return;
}
camera.putExtra(MediaStore.EXTRA_OUTPUT, mCameraOutputUri);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2) {
camera.setClipData(
ClipData.newUri(context.getContentResolver(),
IMAGE_FILE_PATH, mCameraOutputUri));
}
Intent camcorder = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
Intent soundRecorder = new Intent(
MediaStore.Audio.Media.RECORD_SOUND_ACTION);
// Quick check - if the |capture| parameter is set and |fileTypes| has the appropriate MIME
// type, we should just launch the appropriate intent. Otherwise build up a chooser based on
// the accept type and then display that to the user.
if (captureCamera()) {
if (window.showIntent(camera, this, R.string.low_memory_error)) return;
} else if (captureCamcorder()) {
if (window.showIntent(camcorder, this, R.string.low_memory_error)) return;
} else if (captureMicrophone()) {
if (window.showIntent(soundRecorder, this, R.string.low_memory_error)) return;
}
Intent getContentIntent = new Intent(Intent.ACTION_GET_CONTENT);
getContentIntent.addCategory(Intent.CATEGORY_OPENABLE);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2 && multiple)
getContentIntent.putExtra(Intent.EXTRA_ALLOW_MULTIPLE, true);
ArrayList<Intent> extraIntents = new ArrayList<Intent>();
if (!noSpecificType()) {
// Create a chooser based on the accept type that was specified in the webpage. Note
// that if the web page specified multiple accept types, we will have built a generic
// chooser above.
if (shouldShowImageTypes()) {
extraIntents.add(camera);
getContentIntent.setType(ALL_IMAGE_TYPES);
} else if (shouldShowVideoTypes()) {
extraIntents.add(camcorder);
getContentIntent.setType(ALL_VIDEO_TYPES);
} else if (shouldShowAudioTypes()) {
extraIntents.add(soundRecorder);
getContentIntent.setType(ALL_AUDIO_TYPES);
}
}
if (extraIntents.isEmpty()) {
// We couldn't resolve an accept type, so fall back to a generic chooser.
getContentIntent.setType(ANY_TYPES);
extraIntents.add(camera);
extraIntents.add(camcorder);
extraIntents.add(soundRecorder);
}
chooser.putExtra(Intent.EXTRA_INITIAL_INTENTS,
extraIntents.toArray(new Intent[] { }));
chooser.putExtra(Intent.EXTRA_INTENT, getContentIntent);
if (!window.showIntent(chooser, this, R.string.low_memory_error)) {
onFileNotSelected();
}
}
/**
* Get a file for the image capture operation. For devices running JB MR2 or
* later Android versions, the file is put under the IMAGE_FILE_PATH directory.
* For ICS devices, the file is put under CAPTURE_IMAGE_DIRECTORY.
*
* @param context The application context.
* @return file path for the captured image to be stored.
*/
private File getFileForImageCapture(Context context) throws IOException {
File path;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2) {
path = new File(context.getFilesDir(), IMAGE_FILE_PATH);
if (!path.exists() && !path.mkdir()) {
throw new IOException("Folder cannot be created.");
}
} else {
File externalDataDir = Environment.getExternalStoragePublicDirectory(
Environment.DIRECTORY_DCIM);
path = new File(externalDataDir.getAbsolutePath() +
File.separator + CAPTURE_IMAGE_DIRECTORY);
if (!path.exists() && !path.mkdirs()) {
path = externalDataDir;
}
}
File photoFile = File.createTempFile(
String.valueOf(System.currentTimeMillis()), ".jpg", path);
return photoFile;
}
/**
* Callback method to handle the intent results and pass on the path to the native
* SelectFileDialog.
* @param window The window that has access to the application activity.
* @param resultCode The result code whether the intent returned successfully.
* @param contentResolver The content resolver used to extract the path of the selected file.
* @param results The results of the requested intent.
*/
@TargetApi(Build.VERSION_CODES.JELLY_BEAN_MR2)
@Override
public void onIntentCompleted(WindowAndroid window, int resultCode,
ContentResolver contentResolver, Intent results) {
if (resultCode != Activity.RESULT_OK) {
onFileNotSelected();
return;
}
if (results == null) {
// If we have a successful return but no data, then assume this is the camera returning
// the photo that we requested.
// If the uri is a file, we need to convert it to the absolute path, as otherwise
// Android cannot handle it correctly on some earlier versions.
// http://crbug.com/423338.
String path = ContentResolver.SCHEME_FILE.equals(mCameraOutputUri.getScheme()) ?
mCameraOutputUri.getPath() : mCameraOutputUri.toString();
nativeOnFileSelected(mNativeSelectFileDialog, path,
mCameraOutputUri.getLastPathSegment());
// Broadcast to the media scanner that there's a new photo on the device so it will
// show up right away in the gallery (rather than waiting until the next time the media
// scanner runs).
window.sendBroadcast(new Intent(
Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, mCameraOutputUri));
return;
}
// Path for when EXTRA_ALLOW_MULTIPLE Intent extra has been defined. Each of the selected
// files will be shared as an entry on the Intent's ClipData. This functionality is only
// available in Android JellyBean MR2 and higher.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2 &&
results.getData() == null &&
results.getClipData() != null) {
ClipData clipData = results.getClipData();
int itemCount = clipData.getItemCount();
if (itemCount == 0) {
onFileNotSelected();
return;
}
Uri[] filePathArray = new Uri[itemCount];
for (int i = 0; i < itemCount; ++i) {
filePathArray[i] = clipData.getItemAt(i).getUri();
}
GetDisplayNameTask task = new GetDisplayNameTask(contentResolver, true);
task.execute(filePathArray);
return;
}
if (ContentResolver.SCHEME_FILE.equals(results.getData().getScheme())) {
nativeOnFileSelected(mNativeSelectFileDialog,
results.getData().getSchemeSpecificPart(), "");
return;
}
if (ContentResolver.SCHEME_CONTENT.equals(results.getScheme())) {
GetDisplayNameTask task = new GetDisplayNameTask(contentResolver, false);
task.execute(results.getData());
return;
}
onFileNotSelected();
window.showError(R.string.opening_file_error);
}
private void onFileNotSelected() {
nativeOnFileNotSelected(mNativeSelectFileDialog);
}
private boolean noSpecificType() {
// We use a single Intent to decide the type of the file chooser we display to the user,
// which means we can only give it a single type. If there are multiple accept types
// specified, we will fall back to a generic chooser (unless a capture parameter has
// been specified, in which case we'll try to satisfy that first).
return mFileTypes.size() != 1 || mFileTypes.contains(ANY_TYPES);
}
private boolean shouldShowTypes(String allTypes, String specificType) {
if (noSpecificType() || mFileTypes.contains(allTypes)) return true;
return acceptSpecificType(specificType);
}
private boolean shouldShowImageTypes() {
return shouldShowTypes(ALL_IMAGE_TYPES, IMAGE_TYPE);
}
private boolean shouldShowVideoTypes() {
return shouldShowTypes(ALL_VIDEO_TYPES, VIDEO_TYPE);
}
private boolean shouldShowAudioTypes() {
return shouldShowTypes(ALL_AUDIO_TYPES, AUDIO_TYPE);
}
private boolean acceptsSpecificType(String type) {
return mFileTypes.size() == 1 && TextUtils.equals(mFileTypes.get(0), type);
}
private boolean captureCamera() {
return mCapture && acceptsSpecificType(ALL_IMAGE_TYPES);
}
private boolean captureCamcorder() {
return mCapture && acceptsSpecificType(ALL_VIDEO_TYPES);
}
private boolean captureMicrophone() {
return mCapture && acceptsSpecificType(ALL_AUDIO_TYPES);
}
private boolean acceptSpecificType(String accept) {
for (String type : mFileTypes) {
if (type.startsWith(accept)) {
return true;
}
}
return false;
}
private class GetDisplayNameTask extends AsyncTask<Uri, Void, String[]> {
String[] mFilePaths;
final ContentResolver mContentResolver;
final boolean mIsMultiple;
public GetDisplayNameTask(ContentResolver contentResolver, boolean isMultiple) {
mContentResolver = contentResolver;
mIsMultiple = isMultiple;
}
@Override
protected String[] doInBackground(Uri...uris) {
mFilePaths = new String[uris.length];
String[] displayNames = new String[uris.length];
try {
for (int i = 0; i < uris.length; i++) {
mFilePaths[i] = uris[i].toString();
displayNames[i] = ContentUriUtils.getDisplayName(
uris[i], mContentResolver, MediaStore.MediaColumns.DISPLAY_NAME);
}
} catch (SecurityException e) {
// Some third party apps will present themselves as being able
// to handle the ACTION_GET_CONTENT intent but then declare themselves
// as exported=false (or more often omit the exported keyword in
// the manifest which defaults to false after JB).
// In those cases trying to access the contents raises a security exception
// which we should not crash on. See crbug.com/382367 for details.
Log.w(TAG, "Unable to extract results from the content provider");
return null;
}
return displayNames;
}
@Override
protected void onPostExecute(String[] result) {
if (result == null) {
onFileNotSelected();
return;
}
if (mIsMultiple) {
nativeOnMultipleFilesSelected(mNativeSelectFileDialog, mFilePaths, result);
} else {
nativeOnFileSelected(mNativeSelectFileDialog, mFilePaths[0], result[0]);
}
}
}
@CalledByNative
private static SelectFileDialog create(long nativeSelectFileDialog) {
return new SelectFileDialog(nativeSelectFileDialog);
}
private native void nativeOnFileSelected(long nativeSelectFileDialogImpl,
String filePath, String displayName);
private native void nativeOnMultipleFilesSelected(long nativeSelectFileDialogImpl,
String[] filePathArray, String[] displayNameArray);
private native void nativeOnFileNotSelected(long nativeSelectFileDialogImpl);
}
/*
Copyright (c) 2011, Intel Corporation. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of Intel Corporation nor the names of its contributors may
be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
********************************************************************************
* Content : Eigen bindings to Intel(R) MKL
 *    Real Schur decomposition needed for real unsymmetric eigenvalues/eigenvectors.
********************************************************************************
*/
#ifndef EIGEN_REAL_SCHUR_MKL_H
#define EIGEN_REAL_SCHUR_MKL_H
#include "Eigen/src/Core/util/MKL_support.h"
namespace Eigen {
/** \internal Specialization for the data types supported by MKL */
#define EIGEN_MKL_SCHUR_REAL(EIGTYPE, MKLTYPE, MKLPREFIX, MKLPREFIX_U, EIGCOLROW, MKLCOLROW) \
template<> inline \
RealSchur<Matrix<EIGTYPE, Dynamic, Dynamic, EIGCOLROW> >& \
RealSchur<Matrix<EIGTYPE, Dynamic, Dynamic, EIGCOLROW> >::compute(const Matrix<EIGTYPE, Dynamic, Dynamic, EIGCOLROW>& matrix, bool computeU) \
{ \
eigen_assert(matrix.cols() == matrix.rows()); \
\
lapack_int n = matrix.cols(), sdim, info; \
lapack_int lda = matrix.outerStride(); \
lapack_int matrix_order = MKLCOLROW; \
char jobvs, sort='N'; \
LAPACK_##MKLPREFIX_U##_SELECT2 select = 0; \
jobvs = (computeU) ? 'V' : 'N'; \
m_matU.resize(n, n); \
lapack_int ldvs = m_matU.outerStride(); \
m_matT = matrix; \
Matrix<EIGTYPE, Dynamic, Dynamic> wr, wi; \
wr.resize(n, 1); wi.resize(n, 1); \
info = LAPACKE_##MKLPREFIX##gees( matrix_order, jobvs, sort, select, n, (MKLTYPE*)m_matT.data(), lda, &sdim, (MKLTYPE*)wr.data(), (MKLTYPE*)wi.data(), (MKLTYPE*)m_matU.data(), ldvs ); \
if(info == 0) \
m_info = Success; \
else \
m_info = NoConvergence; \
\
m_isInitialized = true; \
m_matUisUptodate = computeU; \
return *this; \
\
}
EIGEN_MKL_SCHUR_REAL(double, double, d, D, ColMajor, LAPACK_COL_MAJOR)
EIGEN_MKL_SCHUR_REAL(float, float, s, S, ColMajor, LAPACK_COL_MAJOR)
EIGEN_MKL_SCHUR_REAL(double, double, d, D, RowMajor, LAPACK_ROW_MAJOR)
EIGEN_MKL_SCHUR_REAL(float, float, s, S, RowMajor, LAPACK_ROW_MAJOR)
} // end namespace Eigen
#endif // EIGEN_REAL_SCHUR_MKL_H
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.APITraceFileSystem;
import junit.framework.TestCase;
/**
 * Test that the APITraceFileSystem preserves functionality
*/
public class TestAPITraceFileSystem extends TestCase {
public void testGetPos() throws Exception {
Configuration conf = new Configuration();
MiniDFSCluster cluster;
FileSystem fs;
FSDataInputStream sin;
FSDataOutputStream sout;
Path testPath = new Path("/testAppend");
byte buf[] = "Hello, World!".getBytes("UTF-16");
byte buf2[] = new byte[buf.length];
conf.setClass("fs.hdfs.impl",
APITraceFileSystem.class,
FileSystem.class);
conf.setBoolean("dfs.support.append", true);
conf.setInt("dfs.replication", 1);
cluster = new MiniDFSCluster(conf, 1, true, null);
try {
cluster.waitActive();
fs = cluster.getFileSystem();
// create/write
sout = fs.create(testPath);
sout.write(buf, 0, buf.length);
sout.close();
// open/getPos()/readFully
sin = fs.open(testPath);
assertEquals(sin.getPos(), 0);
sin.readFully(0, buf2, 0, buf2.length);
assertTrue(Arrays.equals(buf, buf2));
sin.close();
// append/getPos()
sout = fs.append(testPath);
assertEquals(sout.getPos(), buf.length);
sout.close();
fs.close();
} finally {
cluster.shutdown();
}
}
}
/*
* Software License Agreement (BSD License)
*
* Copyright (c) 2010-2012, Willow Garage, Inc.
* Copyright (c) 2009-2012, Urban Robotics, Inc.
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
* * Neither the name of the copyright holder(s) nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
* $Id$
*/
/* \author
* Jacob Schloss (jacob.schloss@urbanrobotics.net),
* Justin Rosen (jmylesrosen@gmail.com),
* Stephen Fox (foxstephend@gmail.com)
*/
#include <pcl/test/gtest.h>
#include <vector>
#include <iostream>
#include <random>
#include <pcl/common/time.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/outofcore/outofcore.h>
#include <pcl/outofcore/outofcore_impl.h>
#include <pcl/PCLPointCloud2.h>
using namespace pcl::outofcore;
#include <boost/foreach.hpp>
/** \brief Unit tests for the UR out-of-core octree code, which test the public interface of OutofcoreOctreeBase
 */
// For doing exhaustive checks this is set low; remove those checks, and this
// can be set much higher
const static std::uint64_t numPts (10000);
constexpr std::uint32_t rngseed = 0xAAFF33DD;
const static boost::filesystem::path filename_otreeA = "treeA/tree_test.oct_idx";
const static boost::filesystem::path filename_otreeB = "treeB/tree_test.oct_idx";
const static boost::filesystem::path filename_otreeA_LOD = "treeA_LOD/tree_test.oct_idx";
const static boost::filesystem::path filename_otreeB_LOD = "treeB_LOD/tree_test.oct_idx";
const static boost::filesystem::path outofcore_path ("point_cloud_octree/tree_test.oct_idx");
using PointT = pcl::PointXYZ;
// UR Typedefs
using octree_disk = OutofcoreOctreeBase<OutofcoreOctreeDiskContainer < PointT > , PointT >;
using octree_disk_node = OutofcoreOctreeBaseNode<OutofcoreOctreeDiskContainer < PointT > , PointT >;
using octree_ram = OutofcoreOctreeBase<OutofcoreOctreeRamContainer< PointT> , PointT>;
using octree_ram_node = OutofcoreOctreeBaseNode<OutofcoreOctreeRamContainer<PointT> , PointT>;
using AlignedPointTVector = std::vector<PointT, Eigen::aligned_allocator<PointT> >;
AlignedPointTVector points;
/** \brief helper function to compare two points. is there a templated function in pcl to do this for arbitrary point types?*/
bool
compPt (const PointT &p1, const PointT &p2)
{
return !(p1.x != p2.x || p1.y != p2.y || p1.z != p2.z);
}
TEST (PCL, Outofcore_Octree_Build)
{
boost::filesystem::remove_all (filename_otreeA.parent_path ());
boost::filesystem::remove_all (filename_otreeB.parent_path ());
Eigen::Vector3d min (-32.0, -32.0, -32.0);
Eigen::Vector3d max (32.0, 32.0, 32.0);
// Build two trees using each constructor
// depth of treeA will be same as B because 1/2^3 > .1 and 1/2^4 < .1
// depth really affects performance
octree_disk treeA (min, max, .1, filename_otreeA, "ECEF");
octree_disk treeB (4, min, max, filename_otreeB, "ECEF");
// Equidistributed uniform pseudo-random number generator
std::mt19937 rng (rngseed);
// For testing sparse
//std::uniform_real_distribution<double> dist(0.0, 1.0);
// For testing less sparse
std::normal_distribution<float> dist (0.5f, .1f);
// Create a point
PointT p;
points.resize (numPts);
//ignore these fields from the UR point for now
// p.r = p.g = p.b = 0;
// p.nx = p.ny = p.nz = 1;
// p.cameraCount = 0;
// p.error = 0;
// p.triadID = 0;
// Randomize its position in space
for (std::size_t i = 0; i < numPts; i++)
{
p.x = dist (rng);
p.y = dist (rng);
p.z = dist (rng);
points[i] = p;
}
// Add to tree
treeA.addDataToLeaf (points);
// Add to tree
treeB.addDataToLeaf (points);
}
TEST (PCL, Outofcore_Octree_Build_LOD)
{
boost::filesystem::remove_all (filename_otreeA_LOD.parent_path ());
boost::filesystem::remove_all (filename_otreeB_LOD.parent_path ());
Eigen::Vector3d min (0.0, 0.0, 0.0);
Eigen::Vector3d max (1.0, 1.0, 1.0);
// Build two trees using each constructor
octree_disk treeA (min, max, .1, filename_otreeA_LOD, "ECEF");
octree_disk treeB (4, min, max, filename_otreeB_LOD, "ECEF");
// Equidistributed uniform pseudo-random number generator
std::mt19937 rng (rngseed);
// For testing sparse
//std::uniform_real_distribution<double> dist(0.0, 1.0);
// For testing less sparse
std::normal_distribution<float> dist (0.5f, .1f);
// Create a point
PointT p;
/*
p.r = p.g = p.b = 0;
p.nx = p.ny = p.nz = 1;
p.cameraCount = 0;
p.error = 0;
p.triadID = 0;
*/
points.resize (numPts);
// Randomize its position in space
for (std::size_t i = 0; i < numPts; i++)
{
p.x = dist (rng);
p.y = dist (rng);
p.z = dist (rng);
points[i] = p;
}
// Add to tree
treeA.addDataToLeaf_and_genLOD (points);
// Add to tree
treeB.addDataToLeaf_and_genLOD (points);
}
TEST(PCL, Outofcore_Bounding_Box)
{
Eigen::Vector3d min (-32.0,-32.0,-32.0);
Eigen::Vector3d max (32.0, 32.0, 32.0);
octree_disk treeA (filename_otreeA, false);
octree_disk treeB (filename_otreeB, false);
Eigen::Vector3d min_otreeA;
Eigen::Vector3d max_otreeA;
treeA.getBoundingBox (min_otreeA, max_otreeA);
Eigen::Vector3d min_otreeB;
Eigen::Vector3d max_otreeB;
treeB.getBoundingBox (min_otreeB, max_otreeB);
for (int i=0; i<3; i++)
{
//octree adds an epsilon to bounding box
EXPECT_LE (min_otreeA[i], min[i]);
EXPECT_NEAR (min_otreeA[i], min[i], 1e4);
EXPECT_GE (max_otreeA[i], max[i]);
EXPECT_NEAR (max_otreeA[i], max[i], 1e4);
EXPECT_LE (min_otreeB[i] , min[i]);
EXPECT_NEAR (min_otreeB[i], min[i], 1e4);
EXPECT_GE (max_otreeB[i] , max[i]);
EXPECT_NEAR (max_otreeB[i], max[i], 1e4);
}
}
void
point_test (octree_disk& t)
{
std::mt19937 rng (rngseed);
std::uniform_real_distribution<float> dist(0.0, 1.0);
Eigen::Vector3d query_box_min;
Eigen::Vector3d qboxmax;
for (int i = 0; i < 10; i++)
{
//std::cout << "query test round " << i << std::endl;
for (int j = 0; j < 3; j++)
{
query_box_min[j] = dist (rng);
qboxmax[j] = dist (rng);
if (qboxmax[j] < query_box_min[j])
{
std::swap (query_box_min[j], qboxmax[j]);
assert (query_box_min[j] < qboxmax[j]);
}
}
//query the trees
AlignedPointTVector p_ot;
t.queryBBIncludes (query_box_min, qboxmax, t.getDepth (), p_ot);
//query the list
AlignedPointTVector pointsinregion;
for (const auto &point : points)
{
if ((query_box_min[0] <= point.x) && (point.x < qboxmax[0]) && (query_box_min[1] < point.y) && (point.y < qboxmax[1]) && (query_box_min[2] <= point.z) && (point.z < qboxmax[2]))
{
pointsinregion.push_back (point);
}
}
EXPECT_EQ (p_ot.size (), pointsinregion.size ());
//very slow exhaustive comparison
while(!p_ot.empty ())
{
AlignedPointTVector::iterator it;
it = std::find_first_of (p_ot.begin (), p_ot.end(), pointsinregion.begin (), pointsinregion.end (), compPt);
if (it != p_ot.end ())
{
p_ot.erase (it);
}
else
{
FAIL () << "Dropped Point from tree1!" << std::endl;
break;
}
}
EXPECT_TRUE (p_ot.empty ());
}
}
TEST (PCL, Outofcore_Point_Query)
{
octree_disk treeA(filename_otreeA, false);
octree_disk treeB(filename_otreeB, false);
point_test(treeA);
point_test(treeB);
}
#if 0 //this class will be deprecated soon.
TEST (PCL, Outofcore_Ram_Tree)
{
Eigen::Vector3d min (0.0,0.0,0.0);
Eigen::Vector3d max (1.0, 1.0, 1.0);
const boost::filesystem::path filename_otreeA = "ram_tree/ram_tree.oct_idx";
octree_ram t (min, max, .1, filename_otreeA, "ECEF");
std::mt19937 rng (rngseed);
//std::uniform_real_distribution<double> dist(0.0, 1.0); //for testing sparse
std::normal_distribution<float> dist (0.5f, .1f); //for testing less sparse
PointT p;
points.resize (numPts);
for (std::size_t i = 0; i < numPts; i++)
{
p.x = dist(rng);
p.y = dist(rng);
p.z = dist(rng);
points[i] = p;
}
t.addDataToLeaf_and_genLOD (points);
//t.addDataToLeaf(points);
Eigen::Vector3d qboxmin;
Eigen::Vector3d qboxmax;
for (int i = 0; i < 10; i++)
{
//std::cout << "query test round " << i << std::endl;
for (int j = 0; j < 3; j++)
{
qboxmin[j] = dist (rng);
qboxmax[j] = dist (rng);
if (qboxmax[j] < qboxmin[j])
{
std::swap (qboxmin[j], qboxmax[j]);
}
}
//query the trees
AlignedPointTVector p_ot1;
t.queryBBIncludes (qboxmin, qboxmax, t.getDepth (), p_ot1);
//query the list
AlignedPointTVector pointsinregion;
for (const PointT& p : points)
{
if ((qboxmin[0] <= p.x) && (p.x <= qboxmax[0]) && (qboxmin[1] <= p.y) && (p.y <= qboxmax[1]) && (qboxmin[2] <= p.z) && (p.z <= qboxmax[2]))
{
pointsinregion.push_back (p);
}
}
EXPECT_EQ (p_ot1.size (), pointsinregion.size ());
//very slow exhaustive comparison
while (!p_ot1.empty ())
{
AlignedPointTVector::iterator it;
it = std::find_first_of (p_ot1.begin (), p_ot1.end (), pointsinregion.begin (), pointsinregion.end (), compPt);
if (it != p_ot1.end ())
{
p_ot1.erase(it);
}
else
{
FAIL () << "Dropped Point from tree1!" << std::endl;
break;
}
}
EXPECT_TRUE (p_ot1.empty ());
}
}
#endif
class OutofcoreTest : public testing::Test
{
protected:
OutofcoreTest () : smallest_voxel_dim () {}
void SetUp () override
{
smallest_voxel_dim = 3.0f;
}
void TearDown () override
{
}
void cleanUpFilesystem ()
{
//clear existing trees from test path
boost::filesystem::remove_all (filename_otreeA.parent_path ());
boost::filesystem::remove_all (filename_otreeB.parent_path ());
boost::filesystem::remove_all (filename_otreeA_LOD.parent_path ());
boost::filesystem::remove_all (filename_otreeB_LOD.parent_path ());
boost::filesystem::remove_all (outofcore_path.parent_path ());
}
double smallest_voxel_dim;
};
/** \brief Thorough test of the constructors, including exceptions and specified behavior */
TEST_F (OutofcoreTest, Outofcore_Constructors)
{
//Case 1: create octree on-disk by resolution
//Case 2: create octree on-disk by depth
//Case 3: try to create an octree in existing tree and handle exception
//Case 4: load existing octree from disk
//Case 5: try to load non-existent octree from disk
cleanUpFilesystem ();
//Specify the lower corner of the axis-aligned bounding box
const Eigen::Vector3d min (-1024.0, -1024.0, -1024.0);
//Specify the upper corner of the axis-aligned bounding box
const Eigen::Vector3d max (1024.0, 1024.0, 1024.0);
AlignedPointTVector some_points;
for (unsigned int i=0; i< numPts; i++)
some_points.push_back (PointT (static_cast<float>(rand () % 1024), static_cast<float>(rand () % 1024), static_cast<float>(rand () % 1024)));
//(Case 1)
//Create Octree based on resolution of smallest voxel, automatically computing depth
octree_disk octreeA (min, max, smallest_voxel_dim, filename_otreeA, "ECEF");
EXPECT_EQ (some_points.size (), octreeA.addDataToLeaf (some_points)) << "Dropped points in voxel resolution constructor\n";
EXPECT_EQ (some_points.size (), octreeA.getNumPointsAtDepth (octreeA.getDepth ()));
//(Case 2)
//create Octree by prespecified depth in constructor
int depth = 2;
octree_disk octreeB (depth, min, max, filename_otreeB, "ECEF");
EXPECT_EQ (some_points.size (), octreeB.addDataToLeaf (some_points)) << "Dropped points in fixed-depth constructor\n";
EXPECT_EQ (some_points.size (), octreeB.getNumPointsAtDepth (octreeB.getDepth ()));
}
TEST_F (OutofcoreTest, Outofcore_ConstructorSafety)
{
//Specify the lower corner of the axis-aligned bounding box
const Eigen::Vector3d min (-1024, -1024, -1024);
//Specify the upper corner of the axis-aligned bounding box
const Eigen::Vector3d max (1024, 1024, 1024);
int depth = 2;
//(Case 3) Constructor Safety. These should throw OCT_CHILD_EXISTS exceptions and write an error
//message of conflicting file path
ASSERT_TRUE (boost::filesystem::exists (filename_otreeA)) << "No tree detected on disk. This test will fail. Perhaps this test was run out of order.\n";
ASSERT_TRUE (boost::filesystem::exists (filename_otreeB)) << "No tree detected on disk. This test will fail. Perhaps this test was run out of order.\n";
EXPECT_ANY_THROW ({ octree_disk octreeC (min, max, smallest_voxel_dim, filename_otreeA, "ECEF"); }) << "Failure to detect existing tree on disk with the same name. Data may be overwritten.\n";
EXPECT_ANY_THROW ({ octree_disk octreeD (depth, min, max, filename_otreeB, "ECEF"); }) << "Failure to detect existing tree on disk with the same name. Data may be overwritten.\n";
//(Case 4): Load existing tree from disk
octree_disk octree_from_disk (filename_otreeB, true);
EXPECT_EQ (numPts , octree_from_disk.getNumPointsAtDepth (octree_from_disk.getDepth ())) << "Failure to count the number of points in a tree already existing on disk\n";
}
TEST_F (OutofcoreTest, Outofcore_ConstructorBadPaths)
{
//(Case 5): Try to load non-existent tree from disk
//root node should be created at this point
/// \todo Shouldn't these throw an exception for bad path?
boost::filesystem::path non_existent_path_name ("treeBogus/tree_bogus.oct_idx");
boost::filesystem::path bad_extension_path ("treeBadExtension/tree_bogus.bad_extension");
EXPECT_FALSE (boost::filesystem::exists (non_existent_path_name));
EXPECT_ANY_THROW ({octree_disk octree_bogus_path (non_existent_path_name, true);});
EXPECT_FALSE (boost::filesystem::exists (bad_extension_path));
EXPECT_ANY_THROW ({octree_disk octree_bad_extension (bad_extension_path, true);});
}
TEST_F (OutofcoreTest, Outofcore_PointcloudConstructor)
{
cleanUpFilesystem ();
//Specify the lower corner of the axis-aligned bounding box
const Eigen::Vector3d min (-1,-1,-1);
//Specify the upper corner of the axis-aligned bounding box
const Eigen::Vector3d max (1024, 1024, 1024);
//create a point cloud
pcl::PointCloud<PointT>::Ptr test_cloud (new pcl::PointCloud<PointT> ());
test_cloud->width = numPts;
test_cloud->height = 1;
test_cloud->reserve (numPts);
//generate some deterministic test points
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 1024),
static_cast<float> (i % 1024),
static_cast<float> (i % 1024));
test_cloud->points.push_back (tmp);
}
EXPECT_EQ (numPts, test_cloud->size ());
octree_disk pcl_cloud (4, min, max, outofcore_path, "ECEF");
pcl_cloud.addPointCloud (test_cloud);
EXPECT_EQ (test_cloud->size (), pcl_cloud.getNumPointsAtDepth (pcl_cloud.getDepth ()));
cleanUpFilesystem ();
}
TEST_F (OutofcoreTest, Outofcore_PointsOnBoundaries)
{
cleanUpFilesystem ();
const Eigen::Vector3d min (-1,-1,-1);
const Eigen::Vector3d max (1,1,1);
pcl::PointCloud<PointT>::Ptr cloud (new pcl::PointCloud<PointT> ());
cloud->width = 8;
cloud->height = 1;
cloud->reserve (8);
for (int i=0; i<8; i++)
{
PointT tmp;
tmp.x = static_cast<float> (pow (-1.0, i)) * 1.0f;
tmp.y = static_cast<float> (pow (-1.0, i+1)) * 1.0f;
tmp.z = static_cast<float> (pow (-1.0, 3*i)) * 1.0f;
cloud->points.push_back (tmp);
}
octree_disk octree (4, min, max, outofcore_path, "ECEF");
octree.addPointCloud (cloud);
EXPECT_EQ (8, octree.getNumPointsAtDepth (octree.getDepth ()));
}
/*
TEST_F (OutofcoreTest, Outofcore_PointCloud2Basic)
{
cleanUpFilesystem ();
const double min[3] = { -1.0, -1.0, -1.0 };
const double max[3] = { 1.0, 1.0, 1.0 };
pcl::PCLPointCloud2::Ptr cloud (new pcl::PCLPointCloud2 ());
}
*/
TEST_F (OutofcoreTest, Outofcore_MultiplePointClouds)
{
cleanUpFilesystem ();
//Specify the lower corner of the axis-aligned bounding box
const Eigen::Vector3d min (-1024,-1024,-1024);
//Specify the upper corner of the axis-aligned bounding box
const Eigen::Vector3d max (1024,1024,1024);
//create a point cloud
pcl::PointCloud<PointT>::Ptr test_cloud (new pcl::PointCloud<PointT> ());
pcl::PointCloud<PointT>::Ptr second_cloud (new pcl::PointCloud<PointT> ());
test_cloud->width = numPts;
test_cloud->height = 1;
test_cloud->reserve (numPts);
second_cloud->width = numPts;
second_cloud->height = 1;
second_cloud->reserve (numPts);
//generate some deterministic test points
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 1024),
static_cast<float> (i % 1024),
static_cast<float> (i % 1024));
test_cloud->points.push_back (tmp);
}
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 1024),
static_cast<float> (i % 1024),
static_cast<float> (i % 1024));
second_cloud->points.push_back (tmp);
}
octree_disk pcl_cloud (4, min, max, outofcore_path, "ECEF");
ASSERT_EQ (test_cloud->size (), pcl_cloud.addPointCloud (test_cloud)) << "Points lost when adding the first cloud to the tree\n";
ASSERT_EQ (numPts, pcl_cloud.getNumPointsAtDepth (pcl_cloud.getDepth ())) << "Bookkeeping of the number of points at the query depth does not match the number of points inserted into the leaves\n";
pcl_cloud.addPointCloud (second_cloud);
EXPECT_EQ (2*numPts, pcl_cloud.getNumPointsAtDepth (pcl_cloud.getDepth ())) << "Points are lost when two point clouds are added to the outofcore file system\n";
pcl_cloud.setSamplePercent (0.125);
pcl_cloud.buildLOD ();
//check that there is at least one point in each LOD
for (std::size_t i=0; i<pcl_cloud.getDepth (); i++)
EXPECT_GE (pcl_cloud.getNumPointsAtDepth (i), 1) << "No points in the LOD indicates buildLOD failed\n";
EXPECT_EQ (2*numPts, pcl_cloud.getNumPointsAtDepth (pcl_cloud.getDepth ())) << "Points in leaves were lost while building LOD!\n";
cleanUpFilesystem ();
}
TEST_F (OutofcoreTest, Outofcore_PointCloudInput_LOD)
{
cleanUpFilesystem ();
//Specify the lower corner of the axis-aligned bounding box
const Eigen::Vector3d min (-1024,-1024,-1024);
//Specify the upper corner of the axis-aligned bounding box
const Eigen::Vector3d max (1024,1024,1024);
//create a point cloud
pcl::PointCloud<PointT>::Ptr test_cloud (new pcl::PointCloud<PointT> ());
pcl::PointCloud<PointT>::Ptr second_cloud (new pcl::PointCloud<PointT> ());
test_cloud->width = numPts;
test_cloud->height = 1;
test_cloud->reserve (numPts);
second_cloud->width = numPts;
second_cloud->height = 1;
second_cloud->reserve (numPts);
//generate some deterministic test points
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 1024),
static_cast<float> (i % 1024),
static_cast<float> (i % 1024));
test_cloud->points.push_back (tmp);
}
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 1024),
static_cast<float> (i % 1024),
static_cast<float> (i % 1024));
second_cloud->points.push_back (tmp);
}
octree_disk pcl_cloud (4, min, max, outofcore_path, "ECEF");
pcl_cloud.addPointCloud_and_genLOD (second_cloud);
// EXPECT_EQ (2*numPts, pcl_cloud.getNumPointsAtDepth (pcl_cloud.getDepth ())) << "Points are lost when two point clouds are added to the outofcore file system\n";
cleanUpFilesystem ();
}
TEST_F (OutofcoreTest, PointCloud2_Constructors)
{
cleanUpFilesystem ();
//Specify the bounding box of the point clouds
const Eigen::Vector3d min (-100.1, -100.1, -100.1);
const Eigen::Vector3d max (100.1, 100.1, 100.1);
const std::uint64_t depth = 2;
//create a point cloud
pcl::PointCloud<PointT>::Ptr test_cloud (new pcl::PointCloud<PointT> ());
test_cloud->width = numPts;
test_cloud->height = 1;
test_cloud->reserve (numPts);
//generate some deterministic test points
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 200) - 99,
static_cast<float> (i % 200) - 99,
static_cast<float> (i % 200) - 99);
test_cloud->points.push_back (tmp);
}
pcl::PCLPointCloud2::Ptr point_cloud (new pcl::PCLPointCloud2);
pcl::toPCLPointCloud2 (*test_cloud, *point_cloud);
octree_disk octreeA (depth, min, max, filename_otreeA, "ECEF");
octree_disk octreeB (depth, min, max, filename_otreeB, "ECEF");
EXPECT_EQ (octreeA.addPointCloud (point_cloud, false), point_cloud->width*point_cloud->height) << "Number of points returned by addPointCloud indicates some points were not properly inserted into the outofcore cloud\n";
EXPECT_EQ (octreeB.addPointCloud_and_genLOD (point_cloud), point_cloud->width*point_cloud->height) << "Number of points inserted when generating LOD does not match the size of the point cloud\n";
}
TEST_F (OutofcoreTest, PointCloud2_Insertion)
{
cleanUpFilesystem ();
const Eigen::Vector3d min (-11, -11, -11);
const Eigen::Vector3d max (11,11,11);
pcl::PointCloud<pcl::PointXYZ> point_cloud;
point_cloud.reserve (numPts);
point_cloud.width = static_cast<std::uint32_t> (numPts);
point_cloud.height = 1;
for (std::size_t i=0; i < numPts; i++)
point_cloud.emplace_back(static_cast<float>(rand () % 10), static_cast<float>(rand () % 10), static_cast<float>(rand () % 10));
pcl::PCLPointCloud2::Ptr input_cloud (new pcl::PCLPointCloud2 ());
pcl::toPCLPointCloud2<pcl::PointXYZ> (point_cloud, *input_cloud);
ASSERT_EQ (point_cloud.width*point_cloud.height, input_cloud->width*input_cloud->height);
octree_disk octreeA (min, max, smallest_voxel_dim, filename_otreeA, "ECEF");
octree_disk octreeB (1, min, max, filename_otreeB, "ECEF");
//make sure the number of points successfully added matches the number we input
std::uint64_t points_in_input_cloud = input_cloud->width*input_cloud->height;
EXPECT_EQ (octreeA.addPointCloud (input_cloud, false), points_in_input_cloud) << "Insertion failure. Number of points successfully added does not match size of input cloud\n";
EXPECT_EQ (octreeB.addPointCloud (input_cloud, false), points_in_input_cloud) << "Insertion failure. Number of points successfully added does not match size of input cloud\n";
}
TEST_F (OutofcoreTest, PointCloud2_MultiplePointCloud)
{
cleanUpFilesystem ();
//Specify the bounding box of the point clouds
const Eigen::Vector3d min (-100.1, -100.1, -100.1);
const Eigen::Vector3d max (100.1, 100.1, 100.1);
//create a point cloud
pcl::PointCloud<PointT>::Ptr first_cloud (new pcl::PointCloud<PointT> ());
pcl::PointCloud<PointT>::Ptr second_cloud (new pcl::PointCloud<PointT> ());
first_cloud->width = numPts;
first_cloud->height = 1;
first_cloud->reserve (numPts);
second_cloud->width = numPts;
second_cloud->height = 1;
second_cloud->reserve (numPts);
//generate some deterministic test points
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 50),
static_cast<float> (i % 50),
static_cast<float> (i % 50));
first_cloud->points.push_back (tmp);
}
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 50),
static_cast<float> (i % 50),
static_cast<float> (i % 50));
second_cloud->points.push_back (tmp);
}
pcl::PCLPointCloud2::Ptr first_cloud_ptr (new pcl::PCLPointCloud2 ());
pcl::PCLPointCloud2::Ptr second_cloud_ptr (new pcl::PCLPointCloud2 ());
pcl::toPCLPointCloud2<PointT> (*first_cloud, *first_cloud_ptr);
pcl::toPCLPointCloud2<PointT> (*second_cloud, *second_cloud_ptr);
//Create an outofcore tree which just concatenates the two clouds into a single PCD in the root node. Check that the number of points is correct.
octree_disk shallow_outofcore (0/*depth*/, min, max, filename_otreeB, "ECEF");
shallow_outofcore.addPointCloud (first_cloud);
shallow_outofcore.addPointCloud (second_cloud);
pcl::PCLPointCloud2::Ptr result (new pcl::PCLPointCloud2 ());
shallow_outofcore.queryBBIncludes (min, max, 0, result);
std::size_t num_points_queried = result->width*result->height;
std::size_t num_points_inserted = first_cloud->width*first_cloud->height + second_cloud->width*second_cloud->height;
EXPECT_EQ (num_points_inserted, num_points_queried) << "If num_points_inserted > num_points_queried, then points were dropped on insertion of multiple clouds into the outofcore octree";
}
TEST_F (OutofcoreTest, PointCloud2_QueryBoundingBox)
{
cleanUpFilesystem ();
//Specify the bounding box of the point clouds
const Eigen::Vector3d min (-100.1, -100.1, -100.1);
const Eigen::Vector3d max (100.1, 100.1, 100.1);
const std::uint64_t depth = 2;
//create a point cloud
pcl::PointCloud<PointT>::Ptr test_cloud (new pcl::PointCloud<PointT> ());
test_cloud->width = numPts;
test_cloud->height = 1;
test_cloud->reserve (numPts);
//generate some deterministic test points
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 50) - 50,
static_cast<float> (i % 50) - 50,
static_cast<float> (i % 50) - 50);
test_cloud->points.push_back (tmp);
}
pcl::PCLPointCloud2::Ptr dst_blob (new pcl::PCLPointCloud2 ());
pcl::toPCLPointCloud2 (*test_cloud, *dst_blob);
octree_disk octreeA (depth, min, max, filename_otreeA, "ECEF");
octree_disk octreeB (depth, min, max, filename_otreeB, "ECEF");
std::uint64_t points_added = octreeA.addPointCloud (dst_blob, false);
EXPECT_EQ (points_added, dst_blob->width*dst_blob->height);
pcl::PCLPointCloud2::Ptr dst_blob2 (new pcl::PCLPointCloud2 ());
octreeA.queryBoundingBox (min, max, 2, dst_blob2);
std::list<std::string> filenames;
octreeA.queryBoundingBox (min, max, 2, filenames);
EXPECT_GE (filenames.size (), 1);
octreeA.queryBoundingBox (min, max, 2, dst_blob2, 0.125);
EXPECT_GE (dst_blob2->width*dst_blob2->height, 1);
cleanUpFilesystem ();
}
//test that the PCLPointCloud2 query returns the same points as the templated queries
TEST_F (OutofcoreTest, PointCloud2_Query)
{
cleanUpFilesystem ();
//Specify the bounding box of the point clouds
const Eigen::Vector3d min (-100.1, -100.1, -100.1);
const Eigen::Vector3d max (100.1, 100.1, 100.1);
const std::uint64_t depth = 2;
//create a point cloud
pcl::PointCloud<PointT>::Ptr test_cloud (new pcl::PointCloud<PointT> ());
test_cloud->width = numPts;
test_cloud->height = 1;
test_cloud->reserve (numPts);
//generate some deterministic test points
for (std::size_t i=0; i < numPts; i++)
{
PointT tmp (static_cast<float> (i % 50) - 50,
static_cast<float> (i % 50) - 50,
static_cast<float> (i % 50) - 50);
test_cloud->points.push_back (tmp);
}
pcl::PCLPointCloud2::Ptr dst_blob (new pcl::PCLPointCloud2 ());
pcl::toPCLPointCloud2 (*test_cloud, *dst_blob);
octree_disk octreeA (depth, min, max, filename_otreeA, "ECEF");
octree_disk octreeB (depth, min, max, filename_otreeB, "ECEF");
std::uint64_t points_added = octreeA.addPointCloud (dst_blob, false);
std::uint64_t LOD_points_added = octreeB.addPointCloud_and_genLOD (dst_blob);
ASSERT_EQ (points_added, dst_blob->width*dst_blob->height) << "Number of points returned by addPointCloud does not match the number of points in the input point cloud\n";
ASSERT_EQ (LOD_points_added, dst_blob->width*dst_blob->height) << "Number of points returned by addPointCloud_and_genLOD does not match the number of points in the input point cloud\n";
pcl::PCLPointCloud2::Ptr query_result_a (new pcl::PCLPointCloud2 ());
pcl::PCLPointCloud2::Ptr query_result_b (new pcl::PCLPointCloud2 ());
octreeA.queryBBIncludes (min, max, int (octreeA.getDepth ()), query_result_a);
EXPECT_EQ (test_cloud->width*test_cloud->height, query_result_a->width*query_result_a->height) << "PCLPointCloud2 query returned the wrong number of points\n";
std::uint64_t total_octreeB_LOD_query = 0;
for (std::uint64_t i=0; i <= octreeB.getDepth (); i++)
{
octreeB.queryBBIncludes (min, max, i, query_result_b);
total_octreeB_LOD_query += query_result_b->width*query_result_b->height;
query_result_b->data.clear ();
query_result_b->width = 0;
query_result_b->height = 0;
}
EXPECT_EQ (test_cloud->width*test_cloud->height, total_octreeB_LOD_query) << "PCLPointCloud2 LOD query returned the wrong total number of points\n";
cleanUpFilesystem ();
}
/* [--- */
int
main (int argc, char** argv)
{
// pcl::console::setVerbosityLevel (pcl::console::L_VERBOSE);
testing::InitGoogleTest (&argc, argv);
return (RUN_ALL_TESTS ());
}
/* ]--- */
'use strict';
Object.defineProperty(exports, "__esModule", { value: true });
console.log("Using global.ethers");
var anyGlobal = window;
var ethers = anyGlobal._ethers;
exports.ethers = ethers;
//# sourceMappingURL=browser-ethers.js.map
Original code is released under Apache 2.0 (See LICENSE for more details).
Other third-party source-code components used in this repository are documented as follows:
https://github.com/sgorsten/linalg (Unlicense)
Authored by Sterling G. Orsten, 2016
https://github.com/sgorsten/json (Unlicense)
Authored by Sterling G. Orsten, 2016
https://github.com/melax/sandbox (MIT)
Copyright (C) Stan Melax, 2014
http://www.glfw.org/ (zlib/libpng)
Copyright (C) Camilla Berglund, 2006+
https://github.com/nigels-com/glew (3-Clause BSD)
Copyright (C) The GLEW Authors, 2002+
https://github.com/ValveSoftware/openvr (3-Clause BSD)
Copyright (C) Valve Corporation, 2015
https://github.com/IntelRealSense/librealsense (Apache 2.0)
Copyright (C) Intel Corporation, 2015
/*
* Copyright 2017 47 Degrees, LLC. <http://www.47deg.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package cards.nine.repository.repositories
import cards.nine.commons.CatchAll
import cards.nine.commons.contentresolver.Conversions._
import cards.nine.commons.contentresolver.NotificationUri._
import cards.nine.commons.contentresolver.{ContentResolverWrapper, UriCreator}
import cards.nine.commons.services.TaskService
import cards.nine.commons.services.TaskService.TaskService
import cards.nine.models.IterableCursor
import cards.nine.models.IterableCursor._
import cards.nine.repository.Conversions.toCard
import cards.nine.repository.model.{Card, CardData, CardsWithCollectionId}
import cards.nine.repository.provider.CardEntity._
import cards.nine.repository.provider.NineCardsUri._
import cards.nine.repository.provider.{CardEntity, NineCardsUri}
import cards.nine.repository.repositories.RepositoryUtils._
import cards.nine.repository.{ImplicitsRepositoryExceptions, RepositoryException}
import scala.language.postfixOps
class CardRepository(contentResolverWrapper: ContentResolverWrapper, uriCreator: UriCreator)
extends ImplicitsRepositoryExceptions {
val cardUri = uriCreator.parse(cardUriString)
val cardNotificationUri = uriCreator.parse(s"$baseUriNotificationString/$cardUriPath")
val collectionNotificationUri =
uriCreator.parse(s"$baseUriNotificationString/$collectionUriPath")
def addCard(collectionId: Int, data: CardData): TaskService[Card] =
TaskService {
CatchAll[RepositoryException] {
val values = createMapValues(data) + (CardEntity.collectionId -> collectionId)
val id = contentResolverWrapper.insert(
uri = cardUri,
values = values,
notificationUris = Seq(
cardNotificationUri,
uriCreator.withAppendedPath(collectionNotificationUri, collectionId.toString)))
Card(id = id, data = data)
}
}
def addCards(datas: Seq[CardsWithCollectionId]): TaskService[Seq[Card]] =
TaskService {
CatchAll[RepositoryException] {
val values = datas flatMap { dataWithCollectionId =>
dataWithCollectionId.data map { data =>
createMapValues(data) +
(CardEntity.collectionId -> dataWithCollectionId.collectionId)
}
}
val collectionNotificationUris = datas.map(_.collectionId).distinct.map { id =>
uriCreator.withAppendedPath(collectionNotificationUri, id.toString)
}
val ids = contentResolverWrapper.inserts(
authority = NineCardsUri.authorityPart,
uri = cardUri,
allValues = values,
notificationUris = collectionNotificationUris :+ cardNotificationUri)
(datas flatMap (_.data)) zip ids map {
case (data, id) => Card(id = id, data = data)
}
}
}
def deleteCards(maybeCollectionId: Option[Int] = None, where: String = ""): TaskService[Int] =
TaskService {
CatchAll[RepositoryException] {
val collectionUri = maybeCollectionId match {
case Some(id) if id != 0 =>
uriCreator.withAppendedPath(collectionNotificationUri, id.toString)
case _ => collectionNotificationUri
}
contentResolverWrapper.delete(
uri = cardUri,
where = where,
notificationUris = Seq(cardNotificationUri, collectionUri))
}
}
def deleteCard(collectionId: Int, cardId: Int): TaskService[Int] =
TaskService {
CatchAll[RepositoryException] {
contentResolverWrapper.deleteById(
uri = cardUri,
id = cardId,
notificationUris = Seq(
cardNotificationUri,
uriCreator.withAppendedPath(collectionNotificationUri, collectionId.toString)))
}
}
def findCardById(id: Int): TaskService[Option[Card]] =
TaskService {
CatchAll[RepositoryException] {
contentResolverWrapper.findById(uri = cardUri, id = id, projection = allFields)(
getEntityFromCursor(cardEntityFromCursor)) map toCard
}
}
def fetchCardsByCollection(collectionId: Int): TaskService[Seq[Card]] =
TaskService {
CatchAll[RepositoryException] {
contentResolverWrapper.fetchAll(
uri = cardUri,
projection = allFields,
where = s"${CardEntity.collectionId} = ?",
whereParams = Seq(collectionId.toString),
orderBy = s"${CardEntity.position} asc")(getListFromCursor(cardEntityFromCursor)) map toCard
}
}
def fetchCards: TaskService[Seq[Card]] =
TaskService {
CatchAll[RepositoryException] {
contentResolverWrapper.fetchAll(uri = cardUri, projection = allFields)(
getListFromCursor(cardEntityFromCursor)) map toCard
}
}
def fetchIterableCards(
where: String = "",
whereParams: Seq[String] = Seq.empty,
orderBy: String = ""): TaskService[IterableCursor[Card]] =
TaskService {
CatchAll[RepositoryException] {
contentResolverWrapper
.getCursor(
uri = cardUri,
projection = allFields,
where = where,
whereParams = whereParams,
orderBy = orderBy)
.toIterator(cardFromCursor)
}
}
def updateCard(card: Card): TaskService[Int] =
TaskService {
CatchAll[RepositoryException] {
val values = createMapValues(card.data)
contentResolverWrapper.updateById(
uri = cardUri,
id = card.id,
values = values,
notificationUris = Seq(cardNotificationUri))
}
}
def updateCards(cards: Seq[Card]): TaskService[Seq[Int]] =
TaskService {
CatchAll[RepositoryException] {
val values = cards map { card =>
(card.id, createMapValues(card.data))
}
contentResolverWrapper.updateByIds(
authority = NineCardsUri.authorityPart,
uri = cardUri,
idAndValues = values,
notificationUris = Seq(cardNotificationUri))
}
}
private[this] def createMapValues(data: CardData) =
Map[String, Any](
position -> data.position,
term -> data.term,
packageName -> flatOrNull(data.packageName),
cardType -> data.cardType,
intent -> data.intent,
imagePath -> flatOrNull(data.imagePath),
notification -> flatOrNull(data.notification))
}
/*
Copyright (C) 2014-2019 de4dot@gmail.com
This file is part of dnSpy
dnSpy is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
dnSpy is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with dnSpy. If not, see <http://www.gnu.org/licenses/>.
*/
using System.ComponentModel;
namespace dnSpy.Roslyn.Compiler.VisualBasic {
/// <summary>
/// Visual Basic compiler settings
/// </summary>
abstract class VisualBasicCompilerSettings : INotifyPropertyChanged {
/// <summary>
/// Raised when a property is changed
/// </summary>
public event PropertyChangedEventHandler? PropertyChanged;
/// <summary>
/// Raises <see cref="PropertyChanged"/>
/// </summary>
/// <param name="propName">Name of property that got changed</param>
protected void OnPropertyChanged(string propName) => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propName));
/// <summary>
/// Conditional compilation symbols, separated by ';' or ','. Key=Value pairs are allowed, but only a limited set of value types
/// is supported (<see cref="bool"/>, <see cref="int"/>, <see cref="double"/>, <see cref="string"/>). String values can have double quotes.
/// </summary>
public abstract string PreprocessorSymbols { get; set; }
/// <summary>
/// Optimize the code (release builds)
/// </summary>
public abstract bool Optimize { get; set; }
/// <summary>
/// Require explicit declaration of variables
/// </summary>
public abstract bool OptionExplicit { get; set; }
/// <summary>
/// Allow type inference of variables
/// </summary>
public abstract bool OptionInfer { get; set; }
/// <summary>
/// Enforce strict language semantics
/// </summary>
public abstract bool OptionStrict { get; set; }
/// <summary>
/// true to use binary-style string comparisons, false to use text-style string comparisons
/// </summary>
public abstract bool OptionCompareBinary { get; set; }
/// <summary>
/// true to always embed the VB runtime, false to use the default behavior (either use the runtime in the GAC or
/// embed it depending on the target framework)
/// </summary>
public abstract bool EmbedVBRuntime { get; set; }
}
}
/*
* Generic PowerPC 44x RNG driver
*
* Copyright 2011 IBM Corporation
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; version 2 of the License.
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/hw_random.h>
#include <linux/delay.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/io.h>
#include "crypto4xx_core.h"
#include "crypto4xx_trng.h"
#include "crypto4xx_reg_def.h"
#define PPC4XX_TRNG_CTRL 0x0008
#define PPC4XX_TRNG_CTRL_DALM 0x20
#define PPC4XX_TRNG_STAT 0x0004
#define PPC4XX_TRNG_STAT_B 0x1
#define PPC4XX_TRNG_DATA 0x0000
static int ppc4xx_trng_data_present(struct hwrng *rng, int wait)
{
struct crypto4xx_device *dev = (void *)rng->priv;
int busy, i, present = 0;
for (i = 0; i < 20; i++) {
busy = (in_le32(dev->trng_base + PPC4XX_TRNG_STAT) &
PPC4XX_TRNG_STAT_B);
if (!busy || !wait) {
present = 1;
break;
}
udelay(10);
}
return present;
}
static int ppc4xx_trng_data_read(struct hwrng *rng, u32 *data)
{
struct crypto4xx_device *dev = (void *)rng->priv;
*data = in_le32(dev->trng_base + PPC4XX_TRNG_DATA);
return 4;
}
static void ppc4xx_trng_enable(struct crypto4xx_device *dev, bool enable)
{
u32 device_ctrl;
device_ctrl = readl(dev->ce_base + CRYPTO4XX_DEVICE_CTRL);
if (enable)
device_ctrl |= PPC4XX_TRNG_EN;
else
device_ctrl &= ~PPC4XX_TRNG_EN;
writel(device_ctrl, dev->ce_base + CRYPTO4XX_DEVICE_CTRL);
}
static const struct of_device_id ppc4xx_trng_match[] = {
{ .compatible = "ppc4xx-rng", },
{ .compatible = "amcc,ppc460ex-rng", },
{ .compatible = "amcc,ppc440epx-rng", },
{},
};
void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev)
{
struct crypto4xx_device *dev = core_dev->dev;
struct device_node *trng = NULL;
struct hwrng *rng = NULL;
int err;
/* Find the TRNG device node and map it */
trng = of_find_matching_node(NULL, ppc4xx_trng_match);
if (!trng || !of_device_is_available(trng)) {
of_node_put(trng);
return;
}
dev->trng_base = of_iomap(trng, 0);
of_node_put(trng);
if (!dev->trng_base)
goto err_out;
rng = kzalloc(sizeof(*rng), GFP_KERNEL);
if (!rng)
goto err_out;
rng->name = KBUILD_MODNAME;
rng->data_present = ppc4xx_trng_data_present;
rng->data_read = ppc4xx_trng_data_read;
rng->priv = (unsigned long) dev;
core_dev->trng = rng;
ppc4xx_trng_enable(dev, true);
out_le32(dev->trng_base + PPC4XX_TRNG_CTRL, PPC4XX_TRNG_CTRL_DALM);
err = devm_hwrng_register(core_dev->device, core_dev->trng);
if (err) {
ppc4xx_trng_enable(dev, false);
dev_err(core_dev->device, "failed to register hwrng (%d).\n",
err);
goto err_out;
}
return;
err_out:
/* trng has already been released by of_node_put() above; a second put here would drop the reference twice */
iounmap(dev->trng_base);
kfree(rng);
dev->trng_base = NULL;
core_dev->trng = NULL;
}
void ppc4xx_trng_remove(struct crypto4xx_core_device *core_dev)
{
if (core_dev && core_dev->trng) {
struct crypto4xx_device *dev = core_dev->dev;
devm_hwrng_unregister(core_dev->device, core_dev->trng);
ppc4xx_trng_enable(dev, false);
iounmap(dev->trng_base);
kfree(core_dev->trng);
}
}
MODULE_ALIAS("ppc4xx_rng");
import Vue from "vue";
import Vuex from "vuex";
Vue.use(Vuex);
export default new Vuex.Store({
state: {},
mutations: {},
actions: {},
modules: {}
});
{ stdenv, fetchurl, fetchFromGitHub, makeWrapper
, meson
, ninja
, pkg-config
, fetchpatch
, platform-tools
, ffmpeg
, SDL2
}:
let
version = "1.15.1";
prebuilt_server = fetchurl {
url = "https://github.com/Genymobile/scrcpy/releases/download/v${version}/scrcpy-server-v${version}";
sha256 = "1hrp2rfwl06ff2b2i12ccka58l1brvn6xqgm1f38k36s61mbs1py";
};
in
stdenv.mkDerivation rec {
pname = "scrcpy";
inherit version;
src = fetchFromGitHub {
owner = "Genymobile";
repo = pname;
rev = "v${version}";
sha256 = "0ijar1cycj42p39cgpnwdwr6nz5pyr6vacr1gvc0f6k92pl8vr13";
};
# postPatch:
# screen.c: When run without a hardware accelerator, this allows the command to continue working rather than failing unexpectedly.
# This can happen when running on non-NixOS because then scrcpy seems to have a hard time using the host OpenGL-supporting hardware.
# It would be better to fix the OpenGL problem, but that seems much more intrusive.
postPatch = ''
substituteInPlace app/src/screen.c \
--replace "SDL_RENDERER_ACCELERATED" "SDL_RENDERER_ACCELERATED || SDL_RENDERER_SOFTWARE"
'';
nativeBuildInputs = [ makeWrapper meson ninja pkg-config ];
buildInputs = [ ffmpeg SDL2 ];
# Manually install the server jar to prevent Meson from "fixing" it
preConfigure = ''
echo -n > server/meson.build
'';
mesonFlags = [ "-Doverride_server_path=${prebuilt_server}" ];
postInstall = ''
mkdir -p "$out/share/scrcpy"
ln -s "${prebuilt_server}" "$out/share/scrcpy/scrcpy-server"
# runtime dep on `adb` to push the server
wrapProgram "$out/bin/scrcpy" --prefix PATH : "${platform-tools}/bin"
'';
meta = with stdenv.lib; {
description = "Display and control Android devices over USB or TCP/IP";
homepage = "https://github.com/Genymobile/scrcpy";
license = licenses.asl20;
platforms = platforms.unix;
maintainers = with maintainers; [ deltaevo lukeadams ];
};
}
/*
ucg_dev_oled_128x128_ilsoft.c
Specific code for the ILSOFT 128x128 OLED module
Universal uC Color Graphics Library
Copyright (c) 2014, olikraus@gmail.com
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list
of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "ucg.h"
//static const uint8_t ucg_dev_ssd1351_128x128_init_seq[] PROGMEM = {
static const ucg_pgm_uint8_t ucg_ilsoft_ssd1351_init_seq[] = {
UCG_CFG_CD(0,1), /* DC=0 for command mode, DC=1 for data and args */
UCG_RST(1),
UCG_CS(1), /* disable chip */
UCG_DLY_MS(1),
UCG_RST(0),
UCG_DLY_MS(1),
UCG_RST(1),
UCG_DLY_MS(50),
UCG_CS(0), /* enable chip */
//UCG_C11(0x0fd, 0x012), /* Unlock normal commands, reset default: unlocked */
UCG_C11(0x0fd, 0x0b1), /* Unlock extra commands, reset default: locked */
//UCG_C10(0x0ae), /* Set Display Off */
UCG_C10(0x0af), /* Set Display On */
UCG_C10(0x0a6), /* Set Display Mode Reset */
UCG_C11(0x0a0, 0x0b4), /* Set Colour Depth */
UCG_C11(0x0a1, 0x000), /* Set Display Start Line */
UCG_C11(0x0a2, 0x000), /* Set Display Offset */
UCG_C12(0x015, 0x000, 0x07f), /* Set Column Address */
UCG_C12(0x075, 0x000, 0x07f), /* Set Row Address */
UCG_C11(0x0b3, 0x0f1), /* Front Clock Div */
//UCG_C11(0x0ca, 0x07f), /* Set Multiplex Ratio, reset default: 0x7f */
UCG_C11(0x0b5, 0x000), /* Set GPIO */
//UCG_C11(0x0ab, 0x001), /* Set Function Selection, reset default: 0x01 */
UCG_C11(0x0b1, 0x032), /* Set Phase Length, reset default: 0x82 */
UCG_C13(0x0b4, 0xa0,0xb5,0x55), /* Set Segment Low Voltage, reset default: 0xa2 0xb5 0x55 */
//UCG_C11(0x0bb, 0x017), /* Set Precharge Voltage, reset default: 0x17 */
//UCG_C11(0x0be, 0x005), /* Set VComH Voltage, reset default: 0x05 */
UCG_C13(0x0c1, 0xc8, 0x80, 0xc8), /* Set Contrast */
UCG_C11(0x0c7, 0x00f), /* Set Master Contrast (0..15), reset default: 0x05 */
UCG_C11(0x0b6, 0x001), /* Set Second Precharge Period */
// UCG_C10(0x0b8), /* Set CMD Grayscale Lookup, 63 Bytes follow */
// UCG_A8(0x05,0x06,0x07,0x08,0x09,0x0a,0x0b,0x0c),
// UCG_A8(0x0D,0x0E,0x0F,0x10,0x11,0x12,0x13,0x14),
// UCG_A8(0x15,0x16,0x18,0x1a,0x1b,0x1C,0x1D,0x1F),
// UCG_A8(0x21,0x23,0x25,0x27,0x2A,0x2D,0x30,0x33),
// UCG_A8(0x36,0x39,0x3C,0x3F,0x42,0x45,0x48,0x4C),
// UCG_A8(0x50,0x54,0x58,0x5C,0x60,0x64,0x68,0x6C),
// UCG_A8(0x70,0x74,0x78,0x7D,0x82,0x87,0x8C,0x91),
// UCG_A7(0x96,0x9B,0xA0,0xA5,0xAA,0xAF,0xB4),
UCG_C10(0x05c), /* Write RAM */
UCG_CS(1), /* disable chip */
UCG_END(), /* end of sequence */
};
ucg_int_t ucg_dev_ssd1351_18x128x128_ilsoft(ucg_t *ucg, ucg_int_t msg, void *data)
{
switch(msg)
{
case UCG_MSG_DEV_POWER_UP:
/* 1. Call the controller procedures to set up the com interface */
if ( ucg_dev_ic_ssd1351_18(ucg, msg, data) == 0 )
return 0;
/* 2. Send specific init sequence for this display module */
ucg_com_SendCmdSeq(ucg, ucg_ilsoft_ssd1351_init_seq);
return 1;
case UCG_MSG_DEV_POWER_DOWN:
/* let the controller procedures handle power down */
return ucg_dev_ic_ssd1351_18(ucg, msg, data);
case UCG_MSG_GET_DIMENSION:
((ucg_wh_t *)data)->w = 128;
((ucg_wh_t *)data)->h = 128;
return 1;
}
/* all other messages are handled by the controller procedures */
return ucg_dev_ic_ssd1351_18(ucg, msg, data);
}
/* Flot plugin for thresholding data.
Copyright (c) 2007-2013 IOLA and Ole Laursen.
Licensed under the MIT license.
The plugin supports these options:
series: {
threshold: {
below: number
color: colorspec
}
}
It can also be applied to a single series, like this:
$.plot( $("#placeholder"), [{
data: [ ... ],
threshold: { ... }
}])
An array can be passed for multiple thresholding, like this:
threshold: [{
below: number1
color: color1
},{
below: number2
color: color2
}]
These multiple threshold objects can be passed in any order since they are
sorted by the processing function.
The data points below "below" are drawn with the specified color. This makes
it easy to mark points below 0, e.g. for budget data.
Internally, the plugin works by splitting the data into two series, above and
below the threshold. The extra series below the threshold will have its label
cleared and the special "originSeries" attribute set to the original series.
You may need to check for this in hover events.
*/
(function ($) {
var options = {
series: { threshold: null } // or { below: number, color: colorspec }
};
function init(plot) {
function thresholdData(plot, s, datapoints, below, color) {
var ps = datapoints.pointsize, i, x, y, p, prevp,
thresholded = $.extend({}, s); // note: shallow copy
thresholded.datapoints = { points: [], pointsize: ps, format: datapoints.format };
thresholded.label = null;
thresholded.color = color;
thresholded.threshold = null;
thresholded.originSeries = s;
thresholded.data = [];
var origpoints = datapoints.points,
addCrossingPoints = s.lines.show;
var threspoints = [];
var newpoints = [];
var m;
for (i = 0; i < origpoints.length; i += ps) {
x = origpoints[i];
y = origpoints[i + 1];
prevp = p;
if (y < below)
p = threspoints;
else
p = newpoints;
if (addCrossingPoints && prevp != p && x != null
&& i > 0 && origpoints[i - ps] != null) {
var interx = x + (below - y) * (x - origpoints[i - ps]) / (y - origpoints[i - ps + 1]);
prevp.push(interx);
prevp.push(below);
for (m = 2; m < ps; ++m)
prevp.push(origpoints[i + m]);
p.push(null); // start new segment
p.push(null);
for (m = 2; m < ps; ++m)
p.push(origpoints[i + m]);
p.push(interx);
p.push(below);
for (m = 2; m < ps; ++m)
p.push(origpoints[i + m]);
}
p.push(x);
p.push(y);
for (m = 2; m < ps; ++m)
p.push(origpoints[i + m]);
}
datapoints.points = newpoints;
thresholded.datapoints.points = threspoints;
if (thresholded.datapoints.points.length > 0) {
var origIndex = $.inArray(s, plot.getData());
// Insert newly-generated series right after original one (to prevent it from becoming top-most)
plot.getData().splice(origIndex + 1, 0, thresholded);
}
// FIXME: there are probably some edge cases left in bars
}
function processThresholds(plot, s, datapoints) {
if (!s.threshold)
return;
if (s.threshold instanceof Array) {
s.threshold.sort(function(a, b) {
return a.below - b.below;
});
$(s.threshold).each(function(i, th) {
thresholdData(plot, s, datapoints, th.below, th.color);
});
}
else {
thresholdData(plot, s, datapoints, s.threshold.below, s.threshold.color);
}
}
plot.hooks.processDatapoints.push(processThresholds);
}
$.plot.plugins.push({
init: init,
options: options,
name: 'threshold',
version: '1.2'
});
})(jQuery);
//
// Wells
// --------------------------------------------------
// Base class
.well {
min-height: 20px;
padding: 19px;
margin-bottom: 20px;
background-color: @well-bg;
border: 1px solid @well-border;
border-radius: @border-radius-base;
.box-shadow(inset 0 1px 1px rgba(0,0,0,.05));
blockquote {
border-color: #ddd;
border-color: rgba(0,0,0,.15);
}
}
// Sizes
.well-lg {
padding: 24px;
border-radius: @border-radius-large;
}
.well-sm {
padding: 9px;
border-radius: @border-radius-small;
}
/****************************************************************************
*
* Copyright 2018 Samsung Electronics All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
* either express or implied. See the License for the specific
* language governing permissions and limitations under the License.
*
****************************************************************************/
//===----------------------------------------------------------------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
#include "test_macros.h"
#include "libcxx_tc_common.h"
int tc_libcxx_iterators_reverse_iter_conv_tested_elsewhere(void)
{
TC_SUCCESS_RESULT();
return 0;
}
"""
Django settings for widget_tweak_example project.
Generated by 'django-admin startproject' using Django 3.0.3.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '@r_@99yq-6s%_t)sch4sh=v@pu)!$wnr5avw_o5^%5r5u=*js*'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'example',
'widget_tweaks',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'widget_tweak_example.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'widget_tweak_example.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/
STATIC_URL = '/static/'
/**
******************************************************************************
* @file stm32f4xx_ll_rng.h
* @author MCD Application Team
* @brief Header file of RNG LL module.
******************************************************************************
* @attention
*
* <h2><center>© COPYRIGHT(c) 2017 STMicroelectronics</center></h2>
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
* 3. Neither the name of STMicroelectronics nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
******************************************************************************
*/
/* Define to prevent recursive inclusion -------------------------------------*/
#ifndef __STM32F4xx_LL_RNG_H
#define __STM32F4xx_LL_RNG_H
#ifdef __cplusplus
extern "C" {
#endif
/* Includes ------------------------------------------------------------------*/
#include "stm32f4xx.h"
/** @addtogroup STM32F4xx_LL_Driver
* @{
*/
#if defined(RNG)
/** @defgroup RNG_LL RNG
* @{
*/
/* Private types -------------------------------------------------------------*/
/* Private variables ---------------------------------------------------------*/
/* Private constants ---------------------------------------------------------*/
/* Private macros ------------------------------------------------------------*/
/* Exported types ------------------------------------------------------------*/
/* Exported constants --------------------------------------------------------*/
/** @defgroup RNG_LL_Exported_Constants RNG Exported Constants
* @{
*/
/** @defgroup RNG_LL_EC_GET_FLAG Get Flags Defines
* @brief Flags defines which can be used with LL_RNG_ReadReg function
* @{
*/
#define LL_RNG_SR_DRDY RNG_SR_DRDY /*!< Register contains valid random data */
#define LL_RNG_SR_CECS RNG_SR_CECS /*!< Clock error current status */
#define LL_RNG_SR_SECS RNG_SR_SECS /*!< Seed error current status */
#define LL_RNG_SR_CEIS RNG_SR_CEIS /*!< Clock error interrupt status */
#define LL_RNG_SR_SEIS RNG_SR_SEIS /*!< Seed error interrupt status */
/**
* @}
*/
/** @defgroup RNG_LL_EC_IT IT Defines
* @brief IT defines which can be used with LL_RNG_ReadReg and LL_RNG_WriteReg macros
* @{
*/
#define LL_RNG_CR_IE RNG_CR_IE /*!< RNG Interrupt enable */
/**
* @}
*/
/**
* @}
*/
/* Exported macro ------------------------------------------------------------*/
/** @defgroup RNG_LL_Exported_Macros RNG Exported Macros
* @{
*/
/** @defgroup RNG_LL_EM_WRITE_READ Common Write and read registers Macros
* @{
*/
/**
* @brief Write a value in RNG register
* @param __INSTANCE__ RNG Instance
* @param __REG__ Register to be written
* @param __VALUE__ Value to be written in the register
* @retval None
*/
#define LL_RNG_WriteReg(__INSTANCE__, __REG__, __VALUE__) WRITE_REG(__INSTANCE__->__REG__, (__VALUE__))
/**
* @brief Read a value in RNG register
* @param __INSTANCE__ RNG Instance
* @param __REG__ Register to be read
* @retval Register value
*/
#define LL_RNG_ReadReg(__INSTANCE__, __REG__) READ_REG(__INSTANCE__->__REG__)
/**
* @}
*/
/**
* @}
*/
/* Exported functions --------------------------------------------------------*/
/** @defgroup RNG_LL_Exported_Functions RNG Exported Functions
* @{
*/
/** @defgroup RNG_LL_EF_Configuration RNG Configuration functions
* @{
*/
/**
* @brief Enable Random Number Generation
* @rmtoll CR RNGEN LL_RNG_Enable
* @param RNGx RNG Instance
* @retval None
*/
__STATIC_INLINE void LL_RNG_Enable(RNG_TypeDef *RNGx)
{
SET_BIT(RNGx->CR, RNG_CR_RNGEN);
}
/**
* @brief Disable Random Number Generation
* @rmtoll CR RNGEN LL_RNG_Disable
* @param RNGx RNG Instance
* @retval None
*/
__STATIC_INLINE void LL_RNG_Disable(RNG_TypeDef *RNGx)
{
CLEAR_BIT(RNGx->CR, RNG_CR_RNGEN);
}
/**
* @brief Check if Random Number Generator is enabled
* @rmtoll CR RNGEN LL_RNG_IsEnabled
* @param RNGx RNG Instance
* @retval State of bit (1 or 0).
*/
__STATIC_INLINE uint32_t LL_RNG_IsEnabled(RNG_TypeDef *RNGx)
{
return (READ_BIT(RNGx->CR, RNG_CR_RNGEN) == (RNG_CR_RNGEN));
}
/**
* @}
*/
/** @defgroup RNG_LL_EF_FLAG_Management FLAG Management
* @{
*/
/**
* @brief Indicate if the RNG Data ready Flag is set or not
* @rmtoll SR DRDY LL_RNG_IsActiveFlag_DRDY
* @param RNGx RNG Instance
* @retval State of bit (1 or 0).
*/
__STATIC_INLINE uint32_t LL_RNG_IsActiveFlag_DRDY(RNG_TypeDef *RNGx)
{
return (READ_BIT(RNGx->SR, RNG_SR_DRDY) == (RNG_SR_DRDY));
}
/**
* @brief Indicate if the Clock Error Current Status Flag is set or not
* @rmtoll SR CECS LL_RNG_IsActiveFlag_CECS
* @param RNGx RNG Instance
* @retval State of bit (1 or 0).
*/
__STATIC_INLINE uint32_t LL_RNG_IsActiveFlag_CECS(RNG_TypeDef *RNGx)
{
return (READ_BIT(RNGx->SR, RNG_SR_CECS) == (RNG_SR_CECS));
}
/**
* @brief Indicate if the Seed Error Current Status Flag is set or not
* @rmtoll SR SECS LL_RNG_IsActiveFlag_SECS
* @param RNGx RNG Instance
* @retval State of bit (1 or 0).
*/
__STATIC_INLINE uint32_t LL_RNG_IsActiveFlag_SECS(RNG_TypeDef *RNGx)
{
return (READ_BIT(RNGx->SR, RNG_SR_SECS) == (RNG_SR_SECS));
}
/**
* @brief Indicate if the Clock Error Interrupt Status Flag is set or not
* @rmtoll SR CEIS LL_RNG_IsActiveFlag_CEIS
* @param RNGx RNG Instance
* @retval State of bit (1 or 0).
*/
__STATIC_INLINE uint32_t LL_RNG_IsActiveFlag_CEIS(RNG_TypeDef *RNGx)
{
return (READ_BIT(RNGx->SR, RNG_SR_CEIS) == (RNG_SR_CEIS));
}
/**
* @brief Indicate if the Seed Error Interrupt Status Flag is set or not
* @rmtoll SR SEIS LL_RNG_IsActiveFlag_SEIS
* @param RNGx RNG Instance
* @retval State of bit (1 or 0).
*/
__STATIC_INLINE uint32_t LL_RNG_IsActiveFlag_SEIS(RNG_TypeDef *RNGx)
{
return (READ_BIT(RNGx->SR, RNG_SR_SEIS) == (RNG_SR_SEIS));
}
/**
* @brief Clear Clock Error interrupt Status (CEIS) Flag
* @rmtoll SR CEIS LL_RNG_ClearFlag_CEIS
* @param RNGx RNG Instance
* @retval None
*/
__STATIC_INLINE void LL_RNG_ClearFlag_CEIS(RNG_TypeDef *RNGx)
{
WRITE_REG(RNGx->SR, ~RNG_SR_CEIS);
}
/**
* @brief Clear Seed Error interrupt Status (SEIS) Flag
* @rmtoll SR SEIS LL_RNG_ClearFlag_SEIS
* @param RNGx RNG Instance
* @retval None
*/
__STATIC_INLINE void LL_RNG_ClearFlag_SEIS(RNG_TypeDef *RNGx)
{
WRITE_REG(RNGx->SR, ~RNG_SR_SEIS);
}
/**
* @}
*/
/** @defgroup RNG_LL_EF_IT_Management IT Management
* @{
*/
/**
* @brief Enable Random Number Generator Interrupt
* (applies for either Seed error, Clock Error or Data ready interrupts)
* @rmtoll CR IE LL_RNG_EnableIT
* @param RNGx RNG Instance
* @retval None
*/
__STATIC_INLINE void LL_RNG_EnableIT(RNG_TypeDef *RNGx)
{
SET_BIT(RNGx->CR, RNG_CR_IE);
}
/**
* @brief Disable Random Number Generator Interrupt
* (applies for either Seed error, Clock Error or Data ready interrupts)
* @rmtoll CR IE LL_RNG_DisableIT
* @param RNGx RNG Instance
* @retval None
*/
__STATIC_INLINE void LL_RNG_DisableIT(RNG_TypeDef *RNGx)
{
CLEAR_BIT(RNGx->CR, RNG_CR_IE);
}
/**
* @brief Check if Random Number Generator Interrupt is enabled
* (applies for either Seed error, Clock Error or Data ready interrupts)
* @rmtoll CR IE LL_RNG_IsEnabledIT
* @param RNGx RNG Instance
* @retval State of bit (1 or 0).
*/
__STATIC_INLINE uint32_t LL_RNG_IsEnabledIT(RNG_TypeDef *RNGx)
{
return (READ_BIT(RNGx->CR, RNG_CR_IE) == (RNG_CR_IE));
}
/**
* @}
*/
/** @defgroup RNG_LL_EF_Data_Management Data Management
* @{
*/
/**
* @brief Return 32-bit Random Number value
* @rmtoll DR RNDATA LL_RNG_ReadRandData32
* @param RNGx RNG Instance
* @retval Generated 32-bit random value
*/
__STATIC_INLINE uint32_t LL_RNG_ReadRandData32(RNG_TypeDef *RNGx)
{
return (uint32_t)(READ_REG(RNGx->DR));
}
/**
* @}
*/
#if defined(USE_FULL_LL_DRIVER)
/** @defgroup RNG_LL_EF_Init Initialization and de-initialization functions
* @{
*/
ErrorStatus LL_RNG_DeInit(RNG_TypeDef *RNGx);
/**
* @}
*/
#endif /* USE_FULL_LL_DRIVER */
/**
* @}
*/
/**
* @}
*/
#endif /* defined(RNG) */
/**
* @}
*/
#ifdef __cplusplus
}
#endif
#endif /* __STM32F4xx_LL_RNG_H */
/************************ (C) COPYRIGHT STMicroelectronics *****END OF FILE****/
using System.Collections.Generic;
using Essensoft.AspNetCore.Payment.Alipay.Response;
namespace Essensoft.AspNetCore.Payment.Alipay.Request
{
/// <summary>
/// ant.merchant.expand.tradeorder.query
/// </summary>
public class AntMerchantExpandTradeorderQueryRequest : IAlipayRequest<AntMerchantExpandTradeorderQueryResponse>
{
/// <summary>
/// 查询订单信息
/// </summary>
public string BizContent { get; set; }
#region IAlipayRequest Members
private bool needEncrypt = false;
private string apiVersion = "1.0";
private string terminalType;
private string terminalInfo;
private string prodCode;
private string notifyUrl;
private string returnUrl;
private AlipayObject bizModel;
public void SetNeedEncrypt(bool needEncrypt)
{
this.needEncrypt = needEncrypt;
}
public bool GetNeedEncrypt()
{
return needEncrypt;
}
public void SetNotifyUrl(string notifyUrl)
{
this.notifyUrl = notifyUrl;
}
public string GetNotifyUrl()
{
return notifyUrl;
}
public void SetReturnUrl(string returnUrl)
{
this.returnUrl = returnUrl;
}
public string GetReturnUrl()
{
return returnUrl;
}
public void SetTerminalType(string terminalType)
{
this.terminalType = terminalType;
}
public string GetTerminalType()
{
return terminalType;
}
public void SetTerminalInfo(string terminalInfo)
{
this.terminalInfo = terminalInfo;
}
public string GetTerminalInfo()
{
return terminalInfo;
}
public void SetProdCode(string prodCode)
{
this.prodCode = prodCode;
}
public string GetProdCode()
{
return prodCode;
}
public string GetApiName()
{
return "ant.merchant.expand.tradeorder.query";
}
public void SetApiVersion(string apiVersion)
{
this.apiVersion = apiVersion;
}
public string GetApiVersion()
{
return apiVersion;
}
public IDictionary<string, string> GetParameters()
{
var parameters = new AlipayDictionary
{
{ "biz_content", BizContent }
};
return parameters;
}
public AlipayObject GetBizModel()
{
return bizModel;
}
public void SetBizModel(AlipayObject bizModel)
{
this.bizModel = bizModel;
}
#endregion
}
}