.TH std::flush_emit 3 "2019.08.27" "http://cppreference.com" "C++ Standard Library"
.SH NAME
std::flush_emit \- std::flush_emit
.SH Synopsis
Defined in header <ostream>
template< class CharT, class Traits >
std::basic_ostream<CharT, Traits>& flush_emit( \fI(since C++20)\fP
std::basic_ostream<CharT, Traits>& os );
Flushes the output sequence os as if by calling os.flush(). Then, if os.rdbuf()
actually points to a std::basic_syncbuf<CharT, Traits, Allocator> buf, calls
buf.emit().
This is an output-only I/O manipulator; it may be called with an expression such as
out << std::flush_emit for any out of type std::basic_ostream.
.SH Parameters
os - reference to output stream
.SH Return value
os (reference to the stream after manipulation)
.SH Example
This section is incomplete
Reason: no example
.SH See also
flush synchronizes with the underlying storage device
\fI(public member function of std::basic_ostream<CharT,Traits>)\fP
/*********************************************************************
* SEGGER Microcontroller GmbH & Co. KG *
* Solutions for real time microcontroller applications *
**********************************************************************
* *
* (c) 1996 - 2015 SEGGER Microcontroller GmbH & Co. KG *
* *
* Internet: www.segger.com Support: support@segger.com *
* *
**********************************************************************
** emWin V5.28 - Graphical user interface for embedded applications **
All Intellectual Property rights in the Software belongs to SEGGER.
emWin is protected by international copyright laws. Knowledge of the
source code may not be used to write a similar product. This file may
only be used in accordance with the following terms:
The software has been licensed to STMicroelectronics International
N.V. a Dutch company with a Swiss branch and its headquarters in Plan-
les-Ouates, Geneva, 39 Chemin du Champ des Filles, Switzerland for the
purposes of creating libraries for ARM Cortex-M-based 32-bit microcon-
troller products commercialized by Licensee only, sublicensed and dis-
tributed under the terms and conditions of the End User License Agree-
ment supplied by STMicroelectronics International N.V.
Full source code is available at: www.segger.com
We appreciate your understanding and fairness.
----------------------------------------------------------------------
File : GUIDRV_Template.h
Purpose : Interface definition for GUIDRV_Template driver
---------------------------END-OF-HEADER------------------------------
*/
#ifndef GUIDRV_TEMPLATE_H
#define GUIDRV_TEMPLATE_H
/*********************************************************************
*
* Display drivers
*/
//
// Addresses
//
extern const GUI_DEVICE_API GUIDRV_Win_API;
extern const GUI_DEVICE_API GUIDRV_Template_API;
//
// Macros to be used in configuration files
//
#if defined(WIN32) && !defined(LCD_SIMCONTROLLER)
#define GUIDRV_TEMPLATE &GUIDRV_Win_API
#else
#define GUIDRV_TEMPLATE &GUIDRV_Template_API
#endif
#endif
/*************************** End of file ****************************/
require_relative '../../spec_helper'
require_relative '../../shared/file/directory'
describe "File.directory?" do
it_behaves_like :file_directory, :directory?, File
end
describe "File.directory?" do
it_behaves_like :file_directory_io, :directory?, File
end
<!-- THIS IS AUTO-GENERATED CONTENT. DO NOT MANUALLY EDIT. -->
This image is part of the [balena.io][balena] base image series for IoT devices. The image is optimized for use with [balena.io][balena] and [balenaOS][balena-os], but can be used in any Docker environment running on the appropriate architecture.
Some notable features in `balenalib` base images:
- A helpful package installer script called `install_packages` that abstracts away the specifics of the underlying package managers. It installs the named packages with the smallest number of dependencies (ignoring optional ones), cleans up the package manager metadata, and retries if a package fails to install.
- Working with dynamically plugged devices: each `balenalib` base image has a default `ENTRYPOINT`, defined as `ENTRYPOINT ["/usr/bin/entry.sh"]`, which checks whether the `UDEV` flag is set to true (e.g. by adding `ENV UDEV=1`). If it is, the entry script starts the `udevd` daemon so that the relevant device nodes appear in the container's `/dev`.
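Both features can be used together; the fragment below is an illustrative sketch (the package names are hypothetical examples, not requirements):

```dockerfile
FROM balenalib/colibri-imx6dl-debian-python:latest

# install_packages picks minimal dependencies, retries on failure,
# and cleans the package manager metadata afterwards
RUN install_packages git i2c-tools

# enable the udev entrypoint so hot-plugged device nodes show up in /dev
ENV UDEV=1
```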
For more details, please check the [features overview](https://www.balena.io/docs/reference/base-images/base-images/#features-overview) in our documentation.
# [Image Variants][variants]
The `balenalib` images come in many flavors, each designed for a specific use case.
## `:<version>` or `:<version>-run`
This is the de facto image. The `run` variant is designed to be a slim and minimal variant with only runtime essentials packaged into it.
## `:<version>-build`
The `build` variant is a heavier image that includes many of the tools required for building from source. This reduces the number of packages you need to install manually in your `Dockerfile` and, since the build tools live in a shared base layer, can reduce the overall size of the images on your system.
[variants]: https://www.balena.io/docs/reference/base-images/base-images/#run-vs-build?ref=dockerhub
# How to use this image with Balena
This [guide][getting-started] can help you get started with using this base image with balena. There are also some cool [example projects][example-projects] that will give you an idea of what can be done with balena.
# What is Python?
Python is an interpreted, interactive, object-oriented, open-source programming language. It incorporates modules, exceptions, dynamic typing, very high level dynamic data types, and classes. Python combines remarkable power with very clear syntax. It has interfaces to many system calls and libraries, as well as to various window systems, and is extensible in C or C++. It is also usable as an extension language for applications that need a programmable interface. Finally, Python is portable: it runs on many Unix variants, on the Mac, and on Windows 2000 and later.
> [wikipedia.org/wiki/Python_(programming_language)](https://en.wikipedia.org/wiki/Python_%28programming_language%29)
# Supported versions and respective `Dockerfile` links:
[ `3.8.5 (latest)`, `2.7.18`, `3.7.8`, `3.6.11`, `3.5.7`](https://github.com/balena-io-library/base-images/tree/master/balena-base-images/python/colibri-imx6dl/debian/)
For more information about this image and its history, please see the [relevant manifest file (`colibri-imx6dl-debian-python`)](https://github.com/balena-io-library/official-images/blob/master/library/colibri-imx6dl-debian-python) in the [`balena-io-library/official-images` GitHub repo](https://github.com/balena-io-library/official-images).
# How to use this image
## Create a `Dockerfile` in your Python app project
```dockerfile
FROM balenalib/colibri-imx6dl-debian-python:latest
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
```
You can then build and run the Docker image:
```console
$ docker build -t my-python-app .
$ docker run -it --rm --name my-running-app my-python-app
```
## Run a single Python script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Python script by using the Python Docker image directly:
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp balenalib/colibri-imx6dl-debian-python:latest python your-daemon-or-script.py
```
[example-projects]: https://www.balena.io/docs/learn/getting-started/colibri-imx6dl/python/#example-projects?ref=dockerhub
[getting-started]: https://www.balena.io/docs/learn/getting-started/colibri-imx6dl/python/?ref=dockerhub
# User Feedback
## Issues
If you have any problems with or questions about this image, please contact us through a [GitHub issue](https://github.com/balena-io-library/base-images/issues).
## Contributing
You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can.
Before you start to code, we recommend discussing your plans through a [GitHub issue](https://github.com/balena-io-library/base-images/issues), especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your design, and help you find out if someone else is working on the same thing.
## Documentation
Documentation for this image is stored in the [base images documentation][docs]. Check it out for a list of all of our base images, including many specialised ones, e.g. for Node.js, Python, Go, and smaller footprints.
You can also find more details about new features in `balenalib` base images in this [blog post][migration-docs].
[docs]: https://www.balena.io/docs/reference/base-images/base-images/#balena-base-images?ref=dockerhub
[migration-docs]: https://www.balena.io/blog/new-year-new-balena-base-images/?ref=dockerhub
[balena]: https://balena.io/?ref=dockerhub
[balena-os]: https://www.balena.io/os/?ref=dockerhub
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.apache.maven.plugins.maven-javadoc-plugin.unit</groupId>
<artifactId>docfiles-test</artifactId>
<packaging>jar</packaging>
<version>1.0-SNAPSHOT</version>
<inceptionYear>2006</inceptionYear>
<name>Maven Javadoc Plugin Docfiles Test</name>
<url>http://maven.apache.org</url>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<configuration>
<project implementation="org.apache.maven.plugins.javadoc.stubs.DocfilesTestMavenProjectStub"/>
<localRepository>${localRepository}</localRepository>
<outputDirectory>${basedir}/target/test/unit/docfiles-test/target/site/apidocs</outputDirectory>
<javadocOptionsDir>${basedir}/target/test/unit/docfiles-test/target/javadoc-bundle-options</javadocOptionsDir>
<breakiterator>false</breakiterator>
<old>false</old>
<show>protected</show>
<quiet>true</quiet>
<verbose>false</verbose>
<author>true</author>
<encoding>ISO-8859-1</encoding>
<linksource>false</linksource>
<nocomment>false</nocomment>
<nodeprecated>false</nodeprecated>
<nodeprecatedlist>false</nodeprecatedlist>
<nohelp>false</nohelp>
<noindex>false</noindex>
<nonavbar>false</nonavbar>
<nosince>false</nosince>
<notree>false</notree>
<serialwarn>false</serialwarn>
<splitindex>false</splitindex>
<stylesheet>java</stylesheet>
<groups/>
<tags/>
<use>true</use>
<version>true</version>
<windowtitle>Maven Javadoc Plugin Docfiles Test 1.0-SNAPSHOT API</windowtitle>
<docfilessubdirs>true</docfilessubdirs>
<excludedocfilessubdir>excluded-dir1:excluded-dir2</excludedocfilessubdir>
<debug>true</debug>
<failOnError>true</failOnError>
</configuration>
</plugin>
</plugins>
</build>
</project>
/* Generated by RuntimeBrowser
Image: /System/Library/PrivateFrameworks/OfficeImport.framework/OfficeImport
*/
@interface WMTableStyle : WMStyle {
WDTableProperties * mWdTableProperties;
}
- (void)addTableProperties:(id)arg1;
- (id)initWithWDTableProperties:(id)arg1;
@end
<?php
/**
* This file is part of PHPPresentation - A pure PHP library for reading and writing
* presentations documents.
*
* PHPPresentation is free software distributed under the terms of the GNU Lesser
* General Public License version 3 as published by the Free Software Foundation.
*
* For the full copyright and license information, please read the LICENSE
* file that was distributed with this source code. For the full list of
* contributors, visit https://github.com/PHPOffice/PHPPresentation/contributors.
*
* @copyright 2009-2015 PHPPresentation contributors
* @license http://www.gnu.org/licenses/lgpl.txt LGPL version 3
* @link https://github.com/PHPOffice/PHPPresentation
*/
namespace PhpOffice\PhpPresentation\Tests\Shape\Chart;
use PhpOffice\PhpPresentation\Shape\Chart\Legend;
use PhpOffice\PhpPresentation\Style\Alignment;
use PhpOffice\PhpPresentation\Style\Border;
use PhpOffice\PhpPresentation\Style\Fill;
use PhpOffice\PhpPresentation\Style\Font;
use PHPUnit\Framework\TestCase;
/**
* Test class for Legend element
*
* @coversDefaultClass PhpOffice\PhpPresentation\Shape\Chart\Legend
*/
class LegendTest extends TestCase
{
public function testConstruct()
{
$object = new Legend();
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Font', $object->getFont());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Border', $object->getBorder());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Fill', $object->getFill());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Alignment', $object->getAlignment());
}
public function testAlignment()
{
$object = new Legend();
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setAlignment(new Alignment()));
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Alignment', $object->getAlignment());
}
public function testBorder()
{
$object = new Legend();
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Border', $object->getBorder());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setBorder(new Border()));
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Border', $object->getBorder());
}
public function testFill()
{
$object = new Legend();
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Fill', $object->getFill());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setFill(new Fill()));
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Fill', $object->getFill());
}
public function testFont()
{
$object = new Legend();
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setFont());
$this->assertNull($object->getFont());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setFont(new Font()));
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Style\\Font', $object->getFont());
}
public function testHashIndex()
{
$object = new Legend();
$value = mt_rand(1, 100);
$this->assertEmpty($object->getHashIndex());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setHashIndex($value));
$this->assertEquals($value, $object->getHashIndex());
}
public function testHeight()
{
$object = new Legend();
$value = mt_rand(0, 100);
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setHeight());
$this->assertEquals(0, $object->getHeight());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setHeight($value));
$this->assertEquals($value, $object->getHeight());
}
public function testOffsetX()
{
$object = new Legend();
$value = mt_rand(0, 100);
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setOffsetX());
$this->assertEquals(0, $object->getOffsetX());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setOffsetX($value));
$this->assertEquals($value, $object->getOffsetX());
}
public function testOffsetY()
{
$object = new Legend();
$value = mt_rand(0, 100);
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setOffsetY());
$this->assertEquals(0, $object->getOffsetY());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setOffsetY($value));
$this->assertEquals($value, $object->getOffsetY());
}
public function testPosition()
{
$object = new Legend();
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setPosition());
$this->assertEquals(Legend::POSITION_RIGHT, $object->getPosition());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setPosition(Legend::POSITION_BOTTOM));
$this->assertEquals(Legend::POSITION_BOTTOM, $object->getPosition());
}
public function testVisible()
{
$object = new Legend();
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setVisible());
$this->assertTrue($object->isVisible());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setVisible(true));
$this->assertTrue($object->isVisible());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setVisible(false));
$this->assertFalse($object->isVisible());
}
public function testWidth()
{
$object = new Legend();
$value = mt_rand(0, 100);
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setWidth());
$this->assertEquals(0, $object->getWidth());
$this->assertInstanceOf('PhpOffice\\PhpPresentation\\Shape\\Chart\\Legend', $object->setWidth($value));
$this->assertEquals($value, $object->getWidth());
}
}
package semver
import (
"reflect"
"strings"
"testing"
)
type comparatorTest struct {
input string
comparator func(comparator) bool
}
func TestParseComparator(t *testing.T) {
comparatorTests := []comparatorTest{
{">", testGT},
{">=", testGE},
{"<", testLT},
{"<=", testLE},
{"", testEQ},
{"=", testEQ},
{"==", testEQ},
{"!=", testNE},
{"!", testNE},
{"-", nil},
{"<==", nil},
{"<<", nil},
{">>", nil},
}
for _, tc := range comparatorTests {
if c := parseComparator(tc.input); c == nil {
if tc.comparator != nil {
t.Errorf("Comparator nil for case %q\n", tc.input)
}
} else if !tc.comparator(c) {
t.Errorf("Invalid comparator for case %q\n", tc.input)
}
}
}
var (
v1 = MustParse("1.2.2")
v2 = MustParse("1.2.3")
v3 = MustParse("1.2.4")
)
func testEQ(f comparator) bool {
return f(v1, v1) && !f(v1, v2)
}
func testNE(f comparator) bool {
return !f(v1, v1) && f(v1, v2)
}
func testGT(f comparator) bool {
return f(v2, v1) && f(v3, v2) && !f(v1, v2) && !f(v1, v1)
}
func testGE(f comparator) bool {
return f(v2, v1) && f(v3, v2) && !f(v1, v2)
}
func testLT(f comparator) bool {
return f(v1, v2) && f(v2, v3) && !f(v2, v1) && !f(v1, v1)
}
func testLE(f comparator) bool {
return f(v1, v2) && f(v2, v3) && !f(v2, v1)
}
func TestSplitAndTrim(t *testing.T) {
tests := []struct {
i string
s []string
}{
{"1.2.3 1.2.3", []string{"1.2.3", "1.2.3"}},
{" 1.2.3 1.2.3 ", []string{"1.2.3", "1.2.3"}}, // Spaces
{"1.2.3 || >=1.2.3 <1.2.3", []string{"1.2.3", "||", ">=1.2.3", "<1.2.3"}},
{" 1.2.3 || >=1.2.3 <1.2.3 ", []string{"1.2.3", "||", ">=1.2.3", "<1.2.3"}},
}
for _, tc := range tests {
p := splitAndTrim(tc.i)
if !reflect.DeepEqual(p, tc.s) {
t.Errorf("Invalid for case %q: Expected %q, got: %q", tc.i, tc.s, p)
}
}
}
func TestSplitComparatorVersion(t *testing.T) {
tests := []struct {
i string
p []string
}{
{">1.2.3", []string{">", "1.2.3"}},
{">=1.2.3", []string{">=", "1.2.3"}},
{"<1.2.3", []string{"<", "1.2.3"}},
{"<=1.2.3", []string{"<=", "1.2.3"}},
{"1.2.3", []string{"", "1.2.3"}},
{"=1.2.3", []string{"=", "1.2.3"}},
{"==1.2.3", []string{"==", "1.2.3"}},
{"!=1.2.3", []string{"!=", "1.2.3"}},
{"!1.2.3", []string{"!", "1.2.3"}},
{"error", nil},
}
for _, tc := range tests {
if op, v, err := splitComparatorVersion(tc.i); err != nil {
if tc.p != nil {
t.Errorf("Invalid for case %q: Expected %q, got error %q", tc.i, tc.p, err)
}
} else if op != tc.p[0] {
t.Errorf("Invalid operator for case %q: Expected %q, got: %q", tc.i, tc.p[0], op)
} else if v != tc.p[1] {
t.Errorf("Invalid version for case %q: Expected %q, got: %q", tc.i, tc.p[1], v)
}
}
}
func TestBuildVersionRange(t *testing.T) {
tests := []struct {
opStr string
vStr string
c func(comparator) bool
v string
}{
{">", "1.2.3", testGT, "1.2.3"},
{">=", "1.2.3", testGE, "1.2.3"},
{"<", "1.2.3", testLT, "1.2.3"},
{"<=", "1.2.3", testLE, "1.2.3"},
{"", "1.2.3", testEQ, "1.2.3"},
{"=", "1.2.3", testEQ, "1.2.3"},
{"==", "1.2.3", testEQ, "1.2.3"},
{"!=", "1.2.3", testNE, "1.2.3"},
{"!", "1.2.3", testNE, "1.2.3"},
{">>", "1.2.3", nil, ""}, // Invalid comparator
{"=", "invalid", nil, ""}, // Invalid version
}
for _, tc := range tests {
if r, err := buildVersionRange(tc.opStr, tc.vStr); err != nil {
if tc.c != nil {
t.Errorf("Invalid for case %q: Expected %q, got error %q", strings.Join([]string{tc.opStr, tc.vStr}, ""), tc.v, err)
}
} else if r == nil {
t.Errorf("Invalid for case %q: got nil", strings.Join([]string{tc.opStr, tc.vStr}, ""))
} else {
// test version
if tv := MustParse(tc.v); !r.v.EQ(tv) {
t.Errorf("Invalid for case %q: Expected version %q, got: %q", strings.Join([]string{tc.opStr, tc.vStr}, ""), tv, r.v)
}
// test comparator
if r.c == nil {
t.Errorf("Invalid for case %q: got nil comparator", strings.Join([]string{tc.opStr, tc.vStr}, ""))
continue
}
if !tc.c(r.c) {
t.Errorf("Invalid comparator for case %q\n", strings.Join([]string{tc.opStr, tc.vStr}, ""))
}
}
}
}
func TestSplitORParts(t *testing.T) {
tests := []struct {
i []string
o [][]string
}{
{[]string{">1.2.3", "||", "<1.2.3", "||", "=1.2.3"}, [][]string{
[]string{">1.2.3"},
[]string{"<1.2.3"},
[]string{"=1.2.3"},
}},
{[]string{">1.2.3", "<1.2.3", "||", "=1.2.3"}, [][]string{
[]string{">1.2.3", "<1.2.3"},
[]string{"=1.2.3"},
}},
{[]string{">1.2.3", "||"}, nil},
{[]string{"||", ">1.2.3"}, nil},
}
for _, tc := range tests {
o, err := splitORParts(tc.i)
if err != nil && tc.o != nil {
t.Errorf("Unexpected error for case %q: %s", tc.i, err)
}
if !reflect.DeepEqual(tc.o, o) {
t.Errorf("Invalid for case %q: Expected %q, got: %q", tc.i, tc.o, o)
}
}
}
func TestVersionRangeToRange(t *testing.T) {
vr := versionRange{
v: MustParse("1.2.3"),
c: compLT,
}
rf := vr.rangeFunc()
if !rf(MustParse("1.2.2")) || rf(MustParse("1.2.3")) {
t.Errorf("Invalid conversion to range func")
}
}
func TestRangeAND(t *testing.T) {
v := MustParse("1.2.2")
v1 := MustParse("1.2.1")
v2 := MustParse("1.2.3")
rf1 := Range(func(v Version) bool {
return v.GT(v1)
})
rf2 := Range(func(v Version) bool {
return v.LT(v2)
})
rf := rf1.AND(rf2)
if rf(v1) {
t.Errorf("Invalid rangefunc, accepted: %s", v1)
}
if rf(v2) {
t.Errorf("Invalid rangefunc, accepted: %s", v2)
}
if !rf(v) {
t.Errorf("Invalid rangefunc, did not accept: %s", v)
}
}
func TestRangeOR(t *testing.T) {
tests := []struct {
v Version
b bool
}{
{MustParse("1.2.0"), true},
{MustParse("1.2.2"), false},
{MustParse("1.2.4"), true},
}
v1 := MustParse("1.2.1")
v2 := MustParse("1.2.3")
rf1 := Range(func(v Version) bool {
return v.LT(v1)
})
rf2 := Range(func(v Version) bool {
return v.GT(v2)
})
rf := rf1.OR(rf2)
for _, tc := range tests {
if r := rf(tc.v); r != tc.b {
t.Errorf("Invalid for case %q: Expected %t, got %t", tc.v, tc.b, r)
}
}
}
func TestParseRange(t *testing.T) {
type tv struct {
v string
b bool
}
tests := []struct {
i string
t []tv
}{
// Simple expressions
{">1.2.3", []tv{
{"1.2.2", false},
{"1.2.3", false},
{"1.2.4", true},
}},
{">=1.2.3", []tv{
{"1.2.3", true},
{"1.2.4", true},
{"1.2.2", false},
}},
{"<1.2.3", []tv{
{"1.2.2", true},
{"1.2.3", false},
{"1.2.4", false},
}},
{"<=1.2.3", []tv{
{"1.2.2", true},
{"1.2.3", true},
{"1.2.4", false},
}},
{"1.2.3", []tv{
{"1.2.2", false},
{"1.2.3", true},
{"1.2.4", false},
}},
{"=1.2.3", []tv{
{"1.2.2", false},
{"1.2.3", true},
{"1.2.4", false},
}},
{"==1.2.3", []tv{
{"1.2.2", false},
{"1.2.3", true},
{"1.2.4", false},
}},
{"!=1.2.3", []tv{
{"1.2.2", true},
{"1.2.3", false},
{"1.2.4", true},
}},
{"!1.2.3", []tv{
{"1.2.2", true},
{"1.2.3", false},
{"1.2.4", true},
}},
// Simple Expression errors
{">>1.2.3", nil},
{"!1.2.3", nil},
{"1.0", nil},
{"string", nil},
{"", nil},
// AND Expressions
{">1.2.2 <1.2.4", []tv{
{"1.2.2", false},
{"1.2.3", true},
{"1.2.4", false},
}},
{"<1.2.2 <1.2.4", []tv{
{"1.2.1", true},
{"1.2.2", false},
{"1.2.3", false},
{"1.2.4", false},
}},
{">1.2.2 <1.2.5 !=1.2.4", []tv{
{"1.2.2", false},
{"1.2.3", true},
{"1.2.4", false},
{"1.2.5", false},
}},
{">1.2.2 <1.2.5 !1.2.4", []tv{
{"1.2.2", false},
{"1.2.3", true},
{"1.2.4", false},
{"1.2.5", false},
}},
// OR Expressions
{">1.2.2 || <1.2.4", []tv{
{"1.2.2", true},
{"1.2.3", true},
{"1.2.4", true},
}},
{"<1.2.2 || >1.2.4", []tv{
{"1.2.2", false},
{"1.2.3", false},
{"1.2.4", false},
}},
// Combined Expressions
{">1.2.2 <1.2.4 || >=2.0.0", []tv{
{"1.2.2", false},
{"1.2.3", true},
{"1.2.4", false},
{"2.0.0", true},
{"2.0.1", true},
}},
{">1.2.2 <1.2.4 || >=2.0.0 <3.0.0", []tv{
{"1.2.2", false},
{"1.2.3", true},
{"1.2.4", false},
{"2.0.0", true},
{"2.0.1", true},
{"2.9.9", true},
{"3.0.0", false},
}},
}
for _, tc := range tests {
r, err := ParseRange(tc.i)
if err != nil && tc.t != nil {
t.Errorf("Error parsing range %q: %s", tc.i, err)
continue
}
for _, tvc := range tc.t {
v := MustParse(tvc.v)
if res := r(v); res != tvc.b {
t.Errorf("Invalid for case %q matching %q: Expected %t, got: %t", tc.i, tvc.v, tvc.b, res)
}
}
}
}
func BenchmarkRangeParseSimple(b *testing.B) {
const VERSION = ">1.0.0"
b.ReportAllocs()
b.ResetTimer()
for n := 0; n < b.N; n++ {
ParseRange(VERSION)
}
}
func BenchmarkRangeParseAverage(b *testing.B) {
const VERSION = ">=1.0.0 <2.0.0"
b.ReportAllocs()
b.ResetTimer()
for n := 0; n < b.N; n++ {
ParseRange(VERSION)
}
}
func BenchmarkRangeParseComplex(b *testing.B) {
const VERSION = ">=1.0.0 <2.0.0 || >=3.0.1 <4.0.0 !=3.0.3 || >=5.0.0"
b.ReportAllocs()
b.ResetTimer()
for n := 0; n < b.N; n++ {
ParseRange(VERSION)
}
}
func BenchmarkRangeMatchSimple(b *testing.B) {
const VERSION = ">1.0.0"
r, _ := ParseRange(VERSION)
v := MustParse("2.0.0")
b.ReportAllocs()
b.ResetTimer()
for n := 0; n < b.N; n++ {
r(v)
}
}
func BenchmarkRangeMatchAverage(b *testing.B) {
const VERSION = ">=1.0.0 <2.0.0"
r, _ := ParseRange(VERSION)
v := MustParse("1.2.3")
b.ReportAllocs()
b.ResetTimer()
for n := 0; n < b.N; n++ {
r(v)
}
}
func BenchmarkRangeMatchComplex(b *testing.B) {
const VERSION = ">=1.0.0 <2.0.0 || >=3.0.1 <4.0.0 !=3.0.3 || >=5.0.0"
r, _ := ParseRange(VERSION)
v := MustParse("5.0.1")
b.ReportAllocs()
b.ResetTimer()
for n := 0; n < b.N; n++ {
r(v)
}
}
// TEST_CFLAGS -framework Foundation
#import <Foundation/Foundation.h>
#import <Foundation/NSDictionary.h>
#import <objc/runtime.h>
#import <objc/objc-abi.h>
#include "test.h"
@interface TestIndexed : NSObject <NSFastEnumeration> {
NSMutableArray *indexedValues;
}
@property(readonly) NSUInteger count;
- (id)objectAtIndexedSubscript:(NSUInteger)index;
- (void)setObject:(id)object atIndexedSubscript:(NSUInteger)index;
@end
@implementation TestIndexed
- (id)init {
if ((self = [super init])) {
indexedValues = [NSMutableArray new];
}
return self;
}
#if !__has_feature(objc_arc)
- (void)dealloc {
[indexedValues release];
[super dealloc];
}
#endif
- (NSUInteger)count { return [indexedValues count]; }
- (id)objectAtIndexedSubscript:(NSUInteger)index { return [indexedValues objectAtIndex:index]; }
- (void)setObject:(id)object atIndexedSubscript:(NSUInteger)index {
if (index == NSNotFound)
[indexedValues addObject:object];
else
[indexedValues replaceObjectAtIndex:index withObject:object];
}
- (NSString *)description {
return [NSString stringWithFormat:@"indexedValues = %@", indexedValues];
}
- (NSUInteger)countByEnumeratingWithState:(NSFastEnumerationState *)state objects:(id __unsafe_unretained [])buffer count:(NSUInteger)len {
return [indexedValues countByEnumeratingWithState:state objects:buffer count:len];
}
@end
@interface TestKeyed : NSObject <NSFastEnumeration> {
NSMutableDictionary *keyedValues;
}
@property(readonly) NSUInteger count;
- (id)objectForKeyedSubscript:(id)key;
- (void)setObject:(id)object forKeyedSubscript:(id)key;
@end
@implementation TestKeyed
- (id)init {
if ((self = [super init])) {
keyedValues = [NSMutableDictionary new];
}
return self;
}
#if !__has_feature(objc_arc)
- (void)dealloc {
[keyedValues release];
[super dealloc];
}
#endif
- (NSUInteger)count { return [keyedValues count]; }
- (id)objectForKeyedSubscript:(id)key { return [keyedValues objectForKey:key]; }
- (void)setObject:(id)object forKeyedSubscript:(id)key {
[keyedValues setObject:object forKey:key];
}
- (NSString *)description {
return [NSString stringWithFormat:@"keyedValues = %@", keyedValues];
}
- (NSUInteger)countByEnumeratingWithState:(NSFastEnumerationState *)state objects:(id __unsafe_unretained [])buffer count:(NSUInteger)len {
return [keyedValues countByEnumeratingWithState:state objects:buffer count:len];
}
@end
int main() {
PUSH_POOL {
#if __has_feature(objc_bool) // placeholder until we get a more precise macro.
TestIndexed *testIndexed = [TestIndexed new];
id objects[] = { @1, @2, @3, @4, @5 };
size_t i, count = sizeof(objects) / sizeof(id);
for (i = 0; i < count; ++i) {
testIndexed[NSNotFound] = objects[i];
}
for (i = 0; i < count; ++i) {
id object = testIndexed[i];
testassert(object == objects[i]);
}
if (testverbose()) {
i = 0;
for (id object in testIndexed) {
NSString *message = [NSString stringWithFormat:@"testIndexed[%zu] = %@\n", i++, object];
testprintf([message UTF8String]);
}
}
TestKeyed *testKeyed = [TestKeyed new];
id keys[] = { @"One", @"Two", @"Three", @"Four", @"Five" };
for (i = 0; i < count; ++i) {
id key = keys[i];
testKeyed[key] = objects[i];
}
for (i = 0; i < count; ++i) {
id key = keys[i];
id object = testKeyed[key];
testassert(object == objects[i]);
}
if (testverbose()) {
for (id key in testKeyed) {
NSString *message = [NSString stringWithFormat:@"testKeyed[@\"%@\"] = %@\n", key, testKeyed[key]];
testprintf([message UTF8String]);
}
}
#endif
} POP_POOL;
succeed(__FILE__);
return 0;
}
//Microsoft Developer Studio generated resource script.
//
#include "jaccesswalkerResource.h"
#define APSTUDIO_READONLY_SYMBOLS
/////////////////////////////////////////////////////////////////////////////
//
// Generated from the TEXTINCLUDE 2 resource.
//
#define APSTUDIO_HIDDEN_SYMBOLS
#include "windows.h"
#undef APSTUDIO_HIDDEN_SYMBOLS
/////////////////////////////////////////////////////////////////////////////
#undef APSTUDIO_READONLY_SYMBOLS
/////////////////////////////////////////////////////////////////////////////
// English (U.S.) resources
#if !defined(AFX_RESOURCE_DLL) || defined(AFX_TARG_ENU)
#ifdef _WIN32
LANGUAGE LANG_ENGLISH, SUBLANG_ENGLISH_US
#pragma code_page(1252)
#endif //_WIN32
/////////////////////////////////////////////////////////////////////////////
//
// Dialog
//
JACCESSWALKERWINDOW DIALOG DISCARDABLE 160, 78, 294, 214
STYLE WS_MINIMIZEBOX | WS_MAXIMIZEBOX | WS_VISIBLE | WS_CAPTION | WS_SYSMENU |
WS_THICKFRAME
CAPTION "jaccesswalker"
MENU 10000
FONT 8, "MS Sans Serif"
BEGIN
CONTROL "Tree1",cTreeControl,"SysTreeView32",TVS_HASBUTTONS |
TVS_HASLINES | TVS_LINESATROOT | TVS_DISABLEDRAGDROP |
WS_BORDER | WS_TABSTOP,4,0,283,214
END
EXPLORERWINDOW DIALOG DISCARDABLE 160, 78, 294, 214
STYLE WS_MINIMIZEBOX | WS_MAXIMIZEBOX | WS_VISIBLE | WS_CAPTION | WS_SYSMENU |
WS_THICKFRAME
CAPTION "Java Accessibility Information"
MENU 10000
FONT 8, "MS Sans Serif"
BEGIN
EDITTEXT cAccessInfoText,4,0,283,214,ES_MULTILINE | ES_AUTOVSCROLL |
ES_READONLY | WS_VSCROLL
END
#ifdef APSTUDIO_INVOKED
/////////////////////////////////////////////////////////////////////////////
//
// TEXTINCLUDE
//
1 TEXTINCLUDE DISCARDABLE
BEGIN
"jaccesswalkerResource.h\0"
END
2 TEXTINCLUDE DISCARDABLE
BEGIN
"#define APSTUDIO_HIDDEN_SYMBOLS\r\n"
"#include ""windows.h""\r\n"
"#undef APSTUDIO_HIDDEN_SYMBOLS\r\n"
"\0"
END
3 TEXTINCLUDE DISCARDABLE
BEGIN
"\r\n"
"\0"
END
#endif // APSTUDIO_INVOKED
/////////////////////////////////////////////////////////////////////////////
//
// DESIGNINFO
//
#ifdef APSTUDIO_INVOKED
GUIDELINES DESIGNINFO DISCARDABLE
BEGIN
"JACCESSWALKERWINDOW", DIALOG
BEGIN
LEFTMARGIN, 4
RIGHTMARGIN, 287
END
"ACCESSINFOWINDOW", DIALOG
BEGIN
LEFTMARGIN, 4
RIGHTMARGIN, 287
END
END
#endif // APSTUDIO_INVOKED
/////////////////////////////////////////////////////////////////////////////
//
// Menu
//
JACCESSWALKERMENU MENU DISCARDABLE
BEGIN
POPUP "File"
BEGIN
MENUITEM "Refresh Tree", cRefreshTreeItem
MENUITEM SEPARATOR
MENUITEM "Exit", cExitMenuItem
END
POPUP "Panels"
BEGIN
MENUITEM "Display Accessibility Information", cAPIMenuItem
END
END
PopupMenu MENU
{
POPUP ""
{
MENUITEM "Display Accessibility Information", cAPIPopupItem
}
}
#endif // English (U.S.) resources
/////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////
//
// Version
//
// Need 2 defines so macro argument to XSTR will get expanded before quoting.
#define XSTR(x) STR(x)
#define STR(x) #x
VS_VERSION_INFO VERSIONINFO
FILEVERSION JDK_FVER
PRODUCTVERSION JDK_FVER
FILEFLAGSMASK 0x3fL
#ifdef _DEBUG
FILEFLAGS 0x1L
#else
FILEFLAGS 0x0L
#endif
// FILEOS 0x4 is Win32, 0x40004 is Win32 NT only
FILEOS 0x4L
// FILETYPE should be 0x1 for .exe and 0x2 for .dll
FILETYPE JDK_FTYPE
FILESUBTYPE 0x0L
BEGIN
BLOCK "StringFileInfo"
BEGIN
BLOCK "000004b0"
BEGIN
VALUE "CompanyName", XSTR(JDK_COMPANY) "\0"
VALUE "FileDescription", XSTR(JDK_COMPONENT) "\0"
VALUE "FileVersion", XSTR(JDK_VER) "\0"
VALUE "Full Version", XSTR(JDK_VERSION_STRING) "\0"
VALUE "InternalName", XSTR(JDK_INTERNAL_NAME) "\0"
VALUE "LegalCopyright", XSTR(JDK_COPYRIGHT) "\0"
VALUE "OriginalFilename", XSTR(JDK_FNAME) "\0"
VALUE "ProductName", XSTR(JDK_NAME) "\0"
VALUE "ProductVersion", XSTR(JDK_VER) "\0"
END
END
BLOCK "VarFileInfo"
BEGIN
VALUE "Translation", 0x0, 1200
END
END
/** @file
Copyright (c) 2004 - 2014, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials are licensed and made available under
the terms and conditions of the BSD License that accompanies this distribution.
The full text of the license may be found at
http://opensource.org/licenses/bsd-license.php.
THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
Module Name:
Dimm.c
Abstract:
PPI for reading SPD modules on DIMMs.
--*/
//
// Header Files
//
#include "Platformearlyinit.h"
#define DIMM_SOCKETS 4 // Total number of DIMM sockets allowed on
// the platform
#define DIMM_SEGMENTS 1 // Total number of Segments Per DIMM.
#define MEMORY_CHANNELS 2 // Total number of memory channels
// populated on the system board
//
// Prototypes
//
EFI_STATUS
EFIAPI
GetDimmState (
IN EFI_PEI_SERVICES **PeiServices,
IN PEI_PLATFORM_DIMM_PPI *This,
IN UINT8 Dimm,
OUT PEI_PLATFORM_DIMM_STATE *State
);
EFI_STATUS
EFIAPI
SetDimmState (
IN EFI_PEI_SERVICES **PeiServices,
IN PEI_PLATFORM_DIMM_PPI *This,
IN UINT8 Dimm,
IN PEI_PLATFORM_DIMM_STATE *State
);
EFI_STATUS
EFIAPI
ReadSpd (
IN EFI_PEI_SERVICES **PeiServices,
IN PEI_PLATFORM_DIMM_PPI *This,
IN UINT8 Dimm,
IN UINT8 Offset,
IN UINTN Count,
IN OUT UINT8 *Buffer
);
static PEI_PLATFORM_DIMM_PPI mGchDimmPpi = {
DIMM_SOCKETS,
DIMM_SEGMENTS,
MEMORY_CHANNELS,
GetDimmState,
SetDimmState,
ReadSpd
};
static EFI_PEI_PPI_DESCRIPTOR mPpiPlatformDimm = {
(EFI_PEI_PPI_DESCRIPTOR_PPI | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST),
&gPeiPlatformDimmPpiGuid,
&mGchDimmPpi
};
//
// Functions
//
/**
This function returns the current state of a single DIMM. Present indicates
that the DIMM slot is physically populated. Disabled indicates that the DIMM
should not be used.
@param PeiServices PEI services table pointer
@param This PPI pointer
@param Dimm DIMM to read from
@param State Pointer to a return buffer to be updated with the current state
of the DIMM
@retval EFI_SUCCESS The function completed successfully.
**/
EFI_STATUS
EFIAPI
GetDimmState (
IN EFI_PEI_SERVICES **PeiServices,
IN PEI_PLATFORM_DIMM_PPI *This,
IN UINT8 Dimm,
OUT PEI_PLATFORM_DIMM_STATE *State
)
{
EFI_STATUS Status;
UINT8 Buffer;
PEI_ASSERT (PeiServices, (Dimm < This->DimmSockets));
//
// A failure here does not necessarily mean that no DIMM is present.
// Read a single byte. All we care about is the return status.
//
Status = ReadSpd (
PeiServices,
This,
Dimm,
0,
1,
&Buffer
);
if (EFI_ERROR (Status)) {
State->Present = 0;
} else {
State->Present = 1;
}
//
// BUGBUG: Update to check platform variable when it is available
//
State->Disabled = 0;
State->Reserved = 0;
return EFI_SUCCESS;
}
/**
This function updates the state of a single DIMM.
@param PeiServices PEI services table pointer
@param This PPI pointer
@param Dimm DIMM to set state for
@param State Pointer to the state information to set.
@retval EFI_SUCCESS The function completed successfully.
@retval EFI_UNSUPPORTED The function is not supported.
**/
EFI_STATUS
EFIAPI
SetDimmState (
IN EFI_PEI_SERVICES **PeiServices,
IN PEI_PLATFORM_DIMM_PPI *This,
IN UINT8 Dimm,
IN PEI_PLATFORM_DIMM_STATE *State
)
{
return EFI_UNSUPPORTED;
}
/**
This function reads SPD information from a DIMM.
  @param PeiServices  PEI services table pointer
  @param This         PPI pointer
  @param Dimm         DIMM to read from
  @param Offset       Offset in DIMM
  @param Count        Number of bytes
  @param Buffer       Return buffer
  @retval EFI_SUCCESS       The function completed successfully.
  @retval EFI_DEVICE_ERROR  The DIMM being accessed reported a device error,
                            does not have an SPD module, or is not installed in
                            the system.
@retval EFI_TIMEOUT Time out trying to read the SPD module.
@retval EFI_INVALID_PARAMETER A parameter was outside the legal limits.
**/
EFI_STATUS
EFIAPI
ReadSpd (
IN EFI_PEI_SERVICES **PeiServices,
IN PEI_PLATFORM_DIMM_PPI *This,
IN UINT8 Dimm,
IN UINT8 Offset,
IN UINTN Count,
IN OUT UINT8 *Buffer
)
{
EFI_STATUS Status;
PEI_SMBUS_PPI *Smbus;
UINTN Index;
UINTN Index1;
EFI_SMBUS_DEVICE_ADDRESS SlaveAddress;
EFI_SMBUS_DEVICE_COMMAND Command;
UINTN Length;
Status = (**PeiServices).LocatePpi (
PeiServices,
&gPeiSmbusPpiGuid, // GUID
0, // INSTANCE
NULL, // EFI_PEI_PPI_DESCRIPTOR
&Smbus // PPI
);
ASSERT_PEI_ERROR (PeiServices, Status);
switch (Dimm) {
case 0:
SlaveAddress.SmbusDeviceAddress = SMBUS_ADDR_CH_A_1 >> 1;
break;
case 1:
SlaveAddress.SmbusDeviceAddress = SMBUS_ADDR_CH_A_2 >> 1;
break;
case 2:
SlaveAddress.SmbusDeviceAddress = SMBUS_ADDR_CH_B_1 >> 1;
break;
case 3:
SlaveAddress.SmbusDeviceAddress = SMBUS_ADDR_CH_B_2 >> 1;
break;
default:
return EFI_INVALID_PARAMETER;
}
Index = Count % 4;
if (Index != 0) {
//
// read the first several bytes (Count % 4) so the remainder can be read in word pairs
//
for (Index1 = 0; Index1 < Index; Index1++) {
Length = 1;
Command = Offset + Index1;
Status = Smbus->Execute (
PeiServices,
Smbus,
SlaveAddress,
Command,
EfiSmbusReadByte,
FALSE,
&Length,
&Buffer[Index1]
);
if (EFI_ERROR(Status)) {
return Status;
}
}
}
//
// Now read the remaining bytes in 4-byte blocks (two word reads per iteration)
//
for (; Index < Count; Index += 2) {
Command = Index + Offset;
Length = 2;
Status = Smbus->Execute (
PeiServices,
Smbus,
SlaveAddress,
Command,
EfiSmbusReadWord,
FALSE,
&Length,
&Buffer[Index]
);
if (EFI_ERROR(Status)) {
return Status;
}
Index += 2;
Command = Index + Offset;
Length = 2;
Status = Smbus->Execute (
PeiServices,
Smbus,
SlaveAddress,
Command,
EfiSmbusReadWord,
FALSE,
&Length,
&Buffer[Index]
);
if (EFI_ERROR(Status)) {
return Status;
}
}
return EFI_SUCCESS;
}
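The index arithmetic in ReadSpd above (read the first Count % 4 bytes one at a time, then consume the remainder in 4-byte blocks via pairs of word reads) is easy to misread; a minimal Python sketch of the same chunking, with the SMBus byte/word reads simulated against a plain list, looks like this:

```python
def read_spd(spd, offset, count):
    """Simulate ReadSpd's chunking: single-byte reads for the first
    count % 4 bytes, then pairs of 2-byte word reads (4 bytes per pass)."""
    out = []
    index = count % 4
    # Leading single-byte reads (EfiSmbusReadByte in the C code)
    for i in range(index):
        out.append(spd[offset + i])
    # Remaining bytes in 4-byte blocks (two EfiSmbusReadWord reads per pass)
    while index < count:
        out.extend(spd[offset + index : offset + index + 2])  # first word
        index += 2
        out.extend(spd[offset + index : offset + index + 2])  # second word
        index += 2
    return out

spd = list(range(256))  # fake 256-byte SPD EEPROM
assert read_spd(spd, 5, 10) == spd[5:15]
assert read_spd(spd, 0, 7) == spd[0:7]
```

Note that each pass of the loop advances the index by 4 in total, matching the C loop's mid-body `Index += 2` plus the `Index += 2` in the for-increment.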
/**
This function initializes the PEIM. It simply installs the DIMM PPI.
@param FfsHeader Not used by this function
@param PeiServices Pointer to PEI services table
@retval EFI_SUCCESS The function completed successfully.
**/
EFI_STATUS
EFIAPI
PeimInitializeDimm (
IN EFI_PEI_SERVICES **PeiServices,
IN EFI_PEI_NOTIFY_DESCRIPTOR *NotifyDescriptor,
IN VOID *SmbusPpi
)
{
EFI_STATUS Status;
Status = (**PeiServices).InstallPpi (
PeiServices,
&mPpiPlatformDimm
);
ASSERT_PEI_ERROR (PeiServices, Status);
return EFI_SUCCESS;
}
// Jest Snapshot v1, https://goo.gl/fbAQLP
exports[`render full-bleed row 1`] = `
<div
className="root"
style={
Object {
"backgroundColor": undefined,
"border": undefined,
"borderColor": undefined,
"borderRadius": undefined,
"borderWidth": undefined,
"marginBottom": undefined,
"marginLeft": undefined,
"marginRight": undefined,
"marginTop": undefined,
"minHeight": undefined,
"paddingBottom": undefined,
"paddingLeft": undefined,
"paddingRight": undefined,
"paddingTop": undefined,
"textAlign": undefined,
}
}
/>
`;
exports[`render row with all props configured 1`] = `
<div
className="root test-class"
style={
Object {
"backgroundAttachment": "fixed",
"backgroundColor": "red",
"backgroundImage": "url(desktop.jpg)",
"backgroundPosition": "center center",
"backgroundRepeat": "repeat",
"backgroundSize": "contain",
"border": "solid",
"borderColor": "red",
"borderRadius": "15px",
"borderWidth": "10px",
"display": "flex",
"flexDirection": "column",
"justifyContent": "center",
"marginBottom": "10px",
"marginLeft": "10px",
"marginRight": "10px",
"marginTop": "10px",
"minHeight": "200px",
"paddingBottom": "10px",
"paddingLeft": "10px",
"paddingRight": "10px",
"paddingTop": "10px",
"textAlign": "right",
}
}
>
<div
className="contained"
/>
</div>
`;
exports[`render row with mobile image displayed and parallax enabled 1`] = `
<div
className="contained"
>
<div
className="inner"
style={
Object {
"backgroundAttachment": undefined,
"backgroundColor": undefined,
"backgroundImage": "url(mobile.jpg)",
"backgroundPosition": undefined,
"backgroundRepeat": "no-repeat",
"backgroundSize": "cover",
"border": undefined,
"borderColor": undefined,
"borderRadius": undefined,
"borderWidth": undefined,
"marginBottom": undefined,
"marginLeft": undefined,
"marginRight": undefined,
"marginTop": undefined,
"minHeight": undefined,
"paddingBottom": undefined,
"paddingLeft": undefined,
"paddingRight": undefined,
"paddingTop": undefined,
"textAlign": undefined,
}
}
/>
</div>
`;
exports[`render row with no props 1`] = `
<div
className="contained"
>
<div
className="inner"
style={
Object {
"backgroundColor": undefined,
"border": undefined,
"borderColor": undefined,
"borderRadius": undefined,
"borderWidth": undefined,
"marginBottom": undefined,
"marginLeft": undefined,
"marginRight": undefined,
"marginTop": undefined,
"minHeight": undefined,
"paddingBottom": undefined,
"paddingLeft": undefined,
"paddingRight": undefined,
"paddingTop": undefined,
"textAlign": undefined,
}
}
/>
</div>
`;
# Maintainer: vinszent <vinszent@vinszent.com>
_gitname=gnome-twitch
pkgname=gnome-twitch-player-backend-gstreamer-opengl
pkgver=0.4.0
pkgrel=2
pkgdesc="GStreamer OpenGL (hardware rendering) player backend for GNOME Twitch"
arch=('i686' 'x86_64')
url="https://github.com/vinszent/gnome-twitch"
license=('GPL3')
makedepends=('git' 'meson')
depends=('gnome-twitch' 'gtk3' 'gstreamer' 'gst-libav' 'gst-plugins-base' 'gst-plugins-good' 'gst-plugins-bad' 'libpeas' 'gobject-introspection')
source=("https://github.com/Ippytraxx/gnome-twitch/archive/v${pkgver}.tar.gz"
"0001-Fix-typo-in-Meson-build-options.patch")
md5sums=('42abec672144865828a9eb4764037a3a'
'9efc76e74fbfd6ca20a2b474b0980002')
conflicts=('gnome-twitch-player-backend-gstreamer-opengl-git')
prepare()
{
cd "${_gitname}-${pkgver}"
patch -p1 -i ../0001-Fix-typo-in-Meson-build-options.patch
}
build()
{
cd "${_gitname}-${pkgver}"
rm -rf build
mkdir build
cd build
meson --prefix /usr --libdir lib --buildtype release \
-Dbuild-executable=false \
-Dbuild-player-backends=gstreamer-opengl ..
ninja
}
package()
{
cd "${_gitname}-${pkgver}/build"
DESTDIR="$pkgdir" ninja install
}
BASH PATCH REPORT
=================
Bash-Release: 4.2
Patch-ID: bash42-034
Bug-Reported-by: "Davide Brini" <dave_br@gmx.com>
Bug-Reference-ID: <20120604164154.69781EC04B@imaps.oficinas.atrapalo.com>
Bug-Reference-URL: http://lists.gnu.org/archive/html/bug-bash/2012-06/msg00030.html
Bug-Description:
In bash-4.2, the history code would inappropriately add a semicolon to
multi-line compound array assignments when adding them to the history.
Patch (apply with `patch -p0'):
*** ../bash-4.2-patched/parse.y 2011-11-21 18:03:36.000000000 -0500
--- ./parse.y 2012-06-07 12:48:47.000000000 -0400
***************
*** 4900,4905 ****
--- 4916,4924 ----
return (current_command_line_count == 2 ? "\n" : "");
}
+ if (parser_state & PST_COMPASSIGN)
+ return (" ");
+
/* First, handle some special cases. */
/*(*/
/* If we just read `()', assume it's a function definition, and don't
*** ../bash-4.2-patched/patchlevel.h Sat Jun 12 20:14:48 2010
--- ./patchlevel.h Thu Feb 24 21:41:34 2011
***************
*** 26,30 ****
looks for to find the patch level (for the sccs version string). */
! #define PATCHLEVEL 33
#endif /* _PATCHLEVEL_H_ */
--- 26,30 ----
looks for to find the patch level (for the sccs version string). */
! #define PATCHLEVEL 34
#endif /* _PATCHLEVEL_H_ */
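The effect of the patch above can be paraphrased as a separator choice: when the parser is inside a compound assignment, the pieces of a multi-line command are joined with a space rather than a semicolon before being added to history. A hypothetical, heavily simplified Python sketch (the real logic lives in bash's history code; `PST_COMPASSIGN` here is just an illustrative flag bit):

```python
PST_COMPASSIGN = 0x1  # hypothetical flag bit: parser is inside a compound assignment

def history_separator(parser_state):
    """Sketch of the separator appended when a continued line is joined
    into a single history entry (simplification, not bash's actual code)."""
    if parser_state & PST_COMPASSIGN:
        # Patched behavior: a=(1 <newline> 2) joins with a space, not ';'
        return " "
    return ";"

assert history_separator(PST_COMPASSIGN) == " "
assert history_separator(0) == ";"
```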
---
-api-id: T:Windows.Web.Http.Headers.HttpChallengeHeaderValueCollection
-api-type: winrt class
---
<!-- Class syntax.
public class HttpChallengeHeaderValueCollection : Windows.Foundation.Collections.IIterable<Windows.Web.Http.Headers.HttpChallengeHeaderValue>, Windows.Foundation.Collections.IVector<Windows.Web.Http.Headers.HttpChallengeHeaderValue>, Windows.Foundation.IStringable, Windows.Web.Http.Headers.IHttpChallengeHeaderValueCollection
-->
# Windows.Web.Http.Headers.HttpChallengeHeaderValueCollection
## -description
Represents the value of the **Proxy-Authenticate** or **WWW-Authenticate** HTTP header on an HTTP response.
## -remarks
The HttpChallengeHeaderValueCollection represents the value of the **Proxy-Authenticate** or **WWW-Authenticate** HTTP header on an HTTP response.
The HttpChallengeHeaderValueCollection provides a collection container for instances of the [HttpChallengeHeaderValue](httpchallengeheadervalue.md) class used for authentication information used in the **Authorization**, **ProxyAuthorization**, **WWW-Authenticate**, and **Proxy-Authenticate** HTTP header values.
The [ProxyAuthenticate](httpresponseheadercollection_proxyauthenticate.md) property on [HttpResponseHeaderCollection](httpresponseheadercollection.md) returns an HttpChallengeHeaderValueCollection object. The [WwwAuthenticate](httpresponseheadercollection_wwwauthenticate.md) property on [HttpResponseHeaderCollection](httpresponseheadercollection.md) also returns an HttpChallengeHeaderValueCollection object.
### Collection member lists
For JavaScript, HttpChallengeHeaderValueCollection has the members shown in the member lists. In addition, HttpChallengeHeaderValueCollection supports members of **Array.prototype** and using an index to access items.
<!--Begin NET note for IEnumerable support-->
### Enumerating the collection in C# or Microsoft Visual Basic
You can iterate through an HttpChallengeHeaderValueCollection object in C# or Microsoft Visual Basic. In many cases, such as using **foreach** syntax, the compiler does this casting for you and you won't need to cast to `IEnumerable<HttpChallengeHeaderValue>` explicitly. If you do need to cast explicitly, for example if you want to call [GetEnumerator](/dotnet/api/system.collections.ienumerable.getenumerator), cast the collection object to [IEnumerable<T>](/dotnet/api/system.collections.generic.ienumerable-1) with an [HttpChallengeHeaderValue](httpchallengeheadervalue.md) constraint.
<!--End NET note for IEnumerable support-->
## -examples
The following sample code shows a method to get and set the **Proxy-Authenticate** HTTP header on an [HttpResponseMessage](../windows.web.http/httpresponsemessage.md) object using the properties and methods on the HttpChallengeHeaderValueCollection and [HttpChallengeHeaderValue](httpchallengeheadervalue.md) classes.
```csharp
using System;
using Windows.Web.Http;
using Windows.Web.Http.Headers;
public void DemonstrateHeaderResponseProxyAuthenticate() {
var response = new HttpResponseMessage();
// Set the header with a strong type.
response.Headers.ProxyAuthenticate.TryParseAdd("Basic");
response.Headers.ProxyAuthenticate.Add(new HttpChallengeHeaderValue("authScheme", "authToken"));
// Get the strong type out
foreach (var value in response.Headers.ProxyAuthenticate) {
System.Diagnostics.Debug.WriteLine("Proxy authenticate scheme and token: {0} {1}", value.Scheme, value.Token);
}
// The ToString() is useful for diagnostics, too.
System.Diagnostics.Debug.WriteLine("The ProxyAuthenticate ToString() results: {0}", response.Headers.ProxyAuthenticate.ToString());
}
```
## -see-also
[HttpChallengeHeaderValue](httpchallengeheadervalue.md), [HttpResponseMessage](../windows.web.http/httpresponsemessage.md), [HttpResponseHeaderCollection](httpresponseheadercollection.md), [IIterable(HttpChallengeHeaderValue)](../windows.foundation.collections/iiterable_1.md), [IStringable](../windows.foundation/istringable.md), [IVector(HttpChallengeHeaderValue)](../windows.foundation.collections/ivector_1.md), [ProxyAuthenticate](httpresponseheadercollection_proxyauthenticate.md), [WwwAuthenticate](httpresponseheadercollection_wwwauthenticate.md)
--TEST--
PHPC-341: fromJSON() leaks when JSON contains array or object fields
--FILE--
<?php
require_once __DIR__ . '/../utils/tools.php';
$tests = array(
'{ "foo": "yes", "bar" : false }',
'{ "foo": "no", "array" : [ 5, 6 ] }',
'{ "foo": "no", "obj" : { "embedded" : 4.125 } }',
);
foreach ($tests as $test) {
$bson = fromJSON($test);
var_dump(toPHP($bson));
}
?>
===DONE===
<?php exit(0); ?>
--EXPECTF--
object(stdClass)#%d (2) {
["foo"]=>
string(3) "yes"
["bar"]=>
bool(false)
}
object(stdClass)#%d (2) {
["foo"]=>
string(2) "no"
["array"]=>
array(2) {
[0]=>
int(5)
[1]=>
int(6)
}
}
object(stdClass)#%d (2) {
["foo"]=>
string(2) "no"
["obj"]=>
object(stdClass)#%d (1) {
["embedded"]=>
float(4.125)
}
}
===DONE===
/**
* Represents a way. A way is automatically added to the geohash index when
* it is instantiated.
*
* @augments Feature
*/
var Way = Class.create(Feature,
/**
* @lends Way#
*/
{
__type__: 'Way',
/**
* Sets up this way's properties and adds it to the geohash index
* @param {Object} data A set of properties that will be copied to this Way.
* @constructs
*/
initialize: function($super, data) {
$super()
geohash = geohash || true
/**
* Number of frames this Way has existed for
* @type Number
*/
this.age = 0
/**
* Timestamp of way creation
* @type Date
*/
this.birthdate = new Date
/**
* If true, this way will have a red border
* @type Boolean
*/
this.highlight = false
/**
* Nodes that make up this Way
* @type Node[]
*/
this.nodes = []
/**
* If true, this way will be treated as a polygon and filled when drawn
* @type Boolean
*/
this.closed_poly = false
this.is_hovered = false
Object.extend(this, data)
if (this.nodes.length > 1 && this.nodes.first().x == this.nodes.last().x &&
this.nodes.first().y == this.nodes.last().y)
this.closed_poly = true
if (this.tags.get('natural') == "coastline") this.closed_poly = true
if (this.closed_poly) {
var centroid = Geometry.poly_centroid(this.nodes)
this.x = centroid[0]*2
this.y = centroid[1]*2
} else {
// attempt to make letters follow line segments:
this.x = (this.middle_segment()[0].x+this.middle_segment()[1].x)/2
this.y = (this.middle_segment()[0].y+this.middle_segment()[1].y)/2
}
this.area = Geometry.poly_area(this.nodes)
this.bbox = Geometry.calculate_bounding_box(this.nodes)
// calculate longest dimension to file in a correct geohash:
// Can we do this in lon/lat only, i.e. save some calculation?
this.width = Math.abs(Projection.x_to_lon(this.bbox[1])-Projection.x_to_lon(this.bbox[3]))
this.height = Math.abs(Projection.y_to_lat(this.bbox[0])-Projection.y_to_lat(this.bbox[2]))
Feature.ways.set(this.id,this)
if (this.coastline) {
Coastline.coastlines.push(this)
} else {
Style.parse_styles(this,Style.styles.way)
Geohash.put_object(this)
}
},
/**
* for coastlines, the [prev,next] way in the series
*/
neighbors: [false,false],
/**
* Adds a reference to itself into the 'chain' array and calls chain on the next or prev member
* @param {Array} chain The array representing the chain of connected Ways
* @param {Boolean} prev If the call is going to the prev member
* @param {Boolean} next If the call is going to the next member
*/
chain: function(chain,prev,next) {
// check if this way has appeared in the chain already:
var uniq = true
chain.each(function(way) {
if (way.id == this.id) uniq = false
},this)
if (uniq) {
if (prev) chain.push(this)
else chain.unshift(this)
$l(chain.length + ","+prev+next)
if (prev && this.neighbors[0]) { // this is the initial call
this.neighbors[0].chain(chain,true,false)
}
if (next && this.neighbors[1]) {
this.neighbors[1].chain(chain,false,true)
}
}
return chain
},
/**
* Finds the middle-most line segment
* @return a tuple of nodes
* @type Node[]
*/
middle_segment: function() {
if (this.nodes.length == 1) {
return [this.nodes[0], this.nodes[0]]
}
else if (this.nodes.length == 2) {
return [this.nodes[0], this.nodes[1]]
}
else {
return [this.nodes[Math.floor(this.nodes.length/2)],
this.nodes[Math.floor(this.nodes.length/2)+1]]
}
},
/**
* Finds the angle of the middle-most line segment
* @return The angle, in radians
* @type Number
*/
middle_segment_angle: function() {
var segment = this.middle_segment()
if (segment[1]) {
var _x = segment[0].x-segment[1].x
var _y = segment[0].y-segment[1].y
      return (Math.atan2(_y, _x))
} else return 0
},
/**
* Draws this way on the canvas
*/
draw: function($super) {
$super()
this.age += 1;
},
/**
* Applies hover and mouseDown styles
*/
style: function() {
if (this.hover || this.menu) {
this.is_hovered = this.is_inside(Map.pointer_x(), Map.pointer_y())
}
// hover
if (this.hover && this.is_hovered) {
if (!this.hover_styles_applied) {
Mouse.hovered_features.push(this)
this.apply_hover_styles()
this.hover_styles_applied = true
}
if (!Object.isUndefined(this.hover.action)) this.hover.action.bind(this)()
}
else if (this.hover_styles_applied) {
Mouse.hovered_features = Mouse.hovered_features.without(this)
this.remove_hover_styles()
this.hover_styles_applied = false
}
// mouseDown
if (this.mouseDown && Mouse.down == true && this.is_hovered) {
if (!this.click_styles_applied) {
this.apply_click_styles()
this.click_styles_applied = true
}
if (!Object.isUndefined(this.mouseDown.action)) this.mouseDown.action.bind(this)()
}
else if (this.click_styles_applied) {
this.remove_click_styles()
this.click_styles_applied = false
}
if (this.menu) {
if (this.is_hovered) {
this.menu.each(function(id) {
ContextMenu.cond_items[id].avail = true
ContextMenu.cond_items[id].context = this
}, this)
}
else {
this.menu.each(function(id) {
if (ContextMenu.cond_items[id].context == this) {
ContextMenu.cond_items[id].avail = false
ContextMenu.cond_items[id].context = window
}
}, this)
}
}
},
/**
* Draws on the canvas to display this way
*/
shape: function() {
$C.opacity(1)
// fade in after load:
if (Object.isUndefined(this.opacity)) this.opacity = 1
if ((Glop.date - this.birthdate) < 4000) {
$C.opacity(Math.max(0,0.1+this.opacity*((Glop.date - this.birthdate)/4000)))
} else {
$C.opacity(this.opacity)
}
$C.begin_path()
if (Config.distort) $C.move_to(this.nodes[0].x,this.nodes[0].y+Math.max(0,75-Geometry.distance(this.nodes[0].x,this.nodes[0].y,Map.pointer_x(),Map.pointer_y())/4))
else $C.move_to(this.nodes[0].x,this.nodes[0].y)
if (Map.resolution == 0) Map.resolution = 1
this.nodes.each(function(node,index){
if ((index % Map.resolution == 0) || index == this.nodes.length-1 || this.nodes.length <= 30) {
// eye candy demo:
if (Config.distort) $C.line_to(node.x,node.y+Math.max(0,75-Geometry.distance(node.x,node.y,Map.pointer_x(),Map.pointer_y())/4))
else $C.line_to(node.x,node.y)
}
},this)
// fill the polygon if the beginning and end nodes are the same.
// we'll have to change this for open polys, like coastlines
if (this.outlineColor && this.outlineWidth) $C.outline(this.outlineColor,this.outlineWidth)
else $C.stroke()
if (this.closed_poly) $C.fill()
if (this.image) {
if (!this.image.src) {
var src = this.image
this.image = new Image()
this.image.src = src
} else if (this.image.width > 0) {
$C.draw_image(this.image, this.x-this.image.width/2, this.y-this.image.height/2)
}
}
// show bboxes for ways:
// $C.line_width(1)
// $C.stroke_style('red')
// $C.stroke_rect(this.bbox[1],this.bbox[0],this.bbox[3]-this.bbox[1],this.bbox[2]-this.bbox[0])
},
apply_default_styles: function($super) {
$super()
this.outline_color = null
this.outline_width = 0
},
refresh_styles: function() {
this.apply_default_styles()
Style.parse_styles(this, Style.styles.way)
},
is_inside: function(x, y) {
if (this.closed_poly) {
return Geometry.is_point_in_poly(this.nodes, x, y)
}
else {
      var width = this.lineWidth + this.outline_width
return Geometry.point_line_distance(x, y, this.nodes) < width
}
}
})
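The is_inside method above delegates closed polygons to Geometry.is_point_in_poly; a minimal ray-casting sketch of that test (nodes are represented here as (x, y) tuples rather than this file's node objects) looks like:

```python
def is_point_in_poly(nodes, x, y):
    """Ray-casting point-in-polygon test: count how many polygon edges a
    horizontal ray from (x, y) crosses; an odd count means 'inside'."""
    inside = False
    j = len(nodes) - 1
    for i in range(len(nodes)):
        xi, yi = nodes[i]
        xj, yj = nodes[j]
        # Edge straddles the ray's y, and the crossing is to the right of x
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
assert is_point_in_poly(square, 5, 5) is True
assert is_point_in_poly(square, 15, 5) is False
```

For open ways, the original code instead compares the point's distance to the polyline against the stroke width, which is the standard hit-test for drawn lines.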
tbx.tbx|{68a4ede6-8f63-44f2-803e-65f770e709e1}|#106|80|#109|{68a4ede6-8f63-44f2-803e-65f770e709e1}|401|0|#107
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for nets.inception_v1."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
from nets import inception
slim = tf.contrib.slim
class InceptionV3Test(tf.test.TestCase):
def testBuildClassificationNetwork(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v3(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV3/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('Predictions' in end_points)
self.assertListEqual(end_points['Predictions'].get_shape().as_list(),
[batch_size, num_classes])
def testBuildBaseNetwork(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random_uniform((batch_size, height, width, 3))
final_endpoint, end_points = inception.inception_v3_base(inputs)
self.assertTrue(final_endpoint.op.name.startswith(
'InceptionV3/Mixed_7c'))
self.assertListEqual(final_endpoint.get_shape().as_list(),
[batch_size, 8, 8, 2048])
expected_endpoints = ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3',
'MaxPool_5a_3x3', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d',
'Mixed_6e', 'Mixed_7a', 'Mixed_7b', 'Mixed_7c']
self.assertItemsEqual(end_points.keys(), expected_endpoints)
def testBuildOnlyUptoFinalEndpoint(self):
batch_size = 5
height, width = 299, 299
endpoints = ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3',
'MaxPool_5a_3x3', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d',
'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d',
'Mixed_6e', 'Mixed_7a', 'Mixed_7b', 'Mixed_7c']
for index, endpoint in enumerate(endpoints):
with tf.Graph().as_default():
inputs = tf.random_uniform((batch_size, height, width, 3))
out_tensor, end_points = inception.inception_v3_base(
inputs, final_endpoint=endpoint)
self.assertTrue(out_tensor.op.name.startswith(
'InceptionV3/' + endpoint))
self.assertItemsEqual(endpoints[:index+1], end_points)
def testBuildAndCheckAllEndPointsUptoMixed7c(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random_uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v3_base(
inputs, final_endpoint='Mixed_7c')
endpoints_shapes = {'Conv2d_1a_3x3': [batch_size, 149, 149, 32],
'Conv2d_2a_3x3': [batch_size, 147, 147, 32],
'Conv2d_2b_3x3': [batch_size, 147, 147, 64],
'MaxPool_3a_3x3': [batch_size, 73, 73, 64],
'Conv2d_3b_1x1': [batch_size, 73, 73, 80],
'Conv2d_4a_3x3': [batch_size, 71, 71, 192],
'MaxPool_5a_3x3': [batch_size, 35, 35, 192],
'Mixed_5b': [batch_size, 35, 35, 256],
'Mixed_5c': [batch_size, 35, 35, 288],
'Mixed_5d': [batch_size, 35, 35, 288],
'Mixed_6a': [batch_size, 17, 17, 768],
'Mixed_6b': [batch_size, 17, 17, 768],
'Mixed_6c': [batch_size, 17, 17, 768],
'Mixed_6d': [batch_size, 17, 17, 768],
'Mixed_6e': [batch_size, 17, 17, 768],
'Mixed_7a': [batch_size, 8, 8, 1280],
'Mixed_7b': [batch_size, 8, 8, 2048],
'Mixed_7c': [batch_size, 8, 8, 2048]}
self.assertItemsEqual(endpoints_shapes.keys(), end_points.keys())
for endpoint_name in endpoints_shapes:
expected_shape = endpoints_shapes[endpoint_name]
self.assertTrue(endpoint_name in end_points)
self.assertListEqual(end_points[endpoint_name].get_shape().as_list(),
expected_shape)
def testModelHasExpectedNumberOfParameters(self):
batch_size = 5
height, width = 299, 299
inputs = tf.random_uniform((batch_size, height, width, 3))
with slim.arg_scope(inception.inception_v3_arg_scope()):
inception.inception_v3_base(inputs)
total_params, _ = slim.model_analyzer.analyze_vars(
slim.get_model_variables())
self.assertAlmostEqual(21802784, total_params)
def testBuildEndPoints(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v3(inputs, num_classes)
self.assertTrue('Logits' in end_points)
logits = end_points['Logits']
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('AuxLogits' in end_points)
aux_logits = end_points['AuxLogits']
self.assertListEqual(aux_logits.get_shape().as_list(),
[batch_size, num_classes])
self.assertTrue('Mixed_7c' in end_points)
pre_pool = end_points['Mixed_7c']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 8, 8, 2048])
self.assertTrue('PreLogits' in end_points)
pre_logits = end_points['PreLogits']
self.assertListEqual(pre_logits.get_shape().as_list(),
[batch_size, 1, 1, 2048])
def testBuildEndPointsWithDepthMultiplierLessThanOne(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v3(inputs, num_classes)
endpoint_keys = [key for key in end_points.keys()
if key.startswith('Mixed') or key.startswith('Conv')]
_, end_points_with_multiplier = inception.inception_v3(
inputs, num_classes, scope='depth_multiplied_net',
depth_multiplier=0.5)
for key in endpoint_keys:
original_depth = end_points[key].get_shape().as_list()[3]
new_depth = end_points_with_multiplier[key].get_shape().as_list()[3]
self.assertEqual(0.5 * original_depth, new_depth)
def testBuildEndPointsWithDepthMultiplierGreaterThanOne(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
_, end_points = inception.inception_v3(inputs, num_classes)
endpoint_keys = [key for key in end_points.keys()
if key.startswith('Mixed') or key.startswith('Conv')]
_, end_points_with_multiplier = inception.inception_v3(
inputs, num_classes, scope='depth_multiplied_net',
depth_multiplier=2.0)
for key in endpoint_keys:
original_depth = end_points[key].get_shape().as_list()[3]
new_depth = end_points_with_multiplier[key].get_shape().as_list()[3]
self.assertEqual(2.0 * original_depth, new_depth)
def testRaiseValueErrorWithInvalidDepthMultiplier(self):
batch_size = 5
height, width = 299, 299
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
with self.assertRaises(ValueError):
_ = inception.inception_v3(inputs, num_classes, depth_multiplier=-0.1)
with self.assertRaises(ValueError):
_ = inception.inception_v3(inputs, num_classes, depth_multiplier=0.0)
def testHalfSizeImages(self):
batch_size = 5
height, width = 150, 150
num_classes = 1000
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, end_points = inception.inception_v3(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV3/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7c']
self.assertListEqual(pre_pool.get_shape().as_list(),
[batch_size, 3, 3, 2048])
def testUnknownImageShape(self):
tf.reset_default_graph()
batch_size = 2
height, width = 299, 299
num_classes = 1000
input_np = np.random.uniform(0, 1, (batch_size, height, width, 3))
with self.test_session() as sess:
inputs = tf.placeholder(tf.float32, shape=(batch_size, None, None, 3))
logits, end_points = inception.inception_v3(inputs, num_classes)
self.assertListEqual(logits.get_shape().as_list(),
[batch_size, num_classes])
pre_pool = end_points['Mixed_7c']
feed_dict = {inputs: input_np}
tf.global_variables_initializer().run()
pre_pool_out = sess.run(pre_pool, feed_dict=feed_dict)
self.assertListEqual(list(pre_pool_out.shape), [batch_size, 8, 8, 2048])
def testUnknownBatchSize(self):
batch_size = 1
height, width = 299, 299
num_classes = 1000
inputs = tf.placeholder(tf.float32, (None, height, width, 3))
logits, _ = inception.inception_v3(inputs, num_classes)
self.assertTrue(logits.op.name.startswith('InceptionV3/Logits'))
self.assertListEqual(logits.get_shape().as_list(),
[None, num_classes])
images = tf.random_uniform((batch_size, height, width, 3))
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(logits, {inputs: images.eval()})
self.assertEqual(output.shape, (batch_size, num_classes))
def testEvaluation(self):
batch_size = 2
height, width = 299, 299
num_classes = 1000
eval_inputs = tf.random_uniform((batch_size, height, width, 3))
logits, _ = inception.inception_v3(eval_inputs, num_classes,
is_training=False)
predictions = tf.argmax(logits, 1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEqual(output.shape, (batch_size,))
def testTrainEvalWithReuse(self):
train_batch_size = 5
eval_batch_size = 2
height, width = 150, 150
num_classes = 1000
train_inputs = tf.random_uniform((train_batch_size, height, width, 3))
inception.inception_v3(train_inputs, num_classes)
eval_inputs = tf.random_uniform((eval_batch_size, height, width, 3))
logits, _ = inception.inception_v3(eval_inputs, num_classes,
is_training=False, reuse=True)
predictions = tf.argmax(logits, 1)
with self.test_session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(predictions)
self.assertEqual(output.shape, (eval_batch_size,))
def testLogitsNotSqueezed(self):
num_classes = 25
images = tf.random_uniform([1, 299, 299, 3])
logits, _ = inception.inception_v3(images,
num_classes=num_classes,
spatial_squeeze=False)
with self.test_session() as sess:
tf.global_variables_initializer().run()
logits_out = sess.run(logits)
self.assertListEqual(list(logits_out.shape), [1, 1, 1, num_classes])
if __name__ == '__main__':
tf.test.main()
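The depth-multiplier behavior these tests exercise can be sketched without TensorFlow: slim-style networks compute each layer's channel count by scaling and flooring, never dropping below a minimum depth. This is a hedged sketch; the `min_depth=16` default is an assumption about the slim convention, not something stated in the tests above.

```python
def scaled_depth(depth, depth_multiplier, min_depth=16):
    """Sketch of slim-style depth scaling: scale the channel count by
    depth_multiplier, flooring to an int but never below min_depth.
    Rejects non-positive multipliers, mirroring
    testRaiseValueErrorWithInvalidDepthMultiplier above."""
    if depth_multiplier <= 0:
        raise ValueError('depth_multiplier is not greater than zero.')
    return max(int(depth * depth_multiplier), min_depth)

# Mixed_7c has 2048 channels at multiplier 1.0:
print(scaled_depth(2048, 0.5))  # 1024, as in ...DepthMultiplierLessThanOne
print(scaled_depth(2048, 2.0))  # 4096, as in ...DepthMultiplierGreaterThanOne
```

For large layers this matches the exact `0.5 *` / `2.0 *` scaling the tests assert; `min_depth` only kicks in for very narrow layers.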
| {
"pile_set_name": "Github"
} |
#import "GPUImageSobelEdgeDetectionFilter.h"
@interface GPUImagePrewittEdgeDetectionFilter : GPUImageSobelEdgeDetectionFilter
@end
| {
"pile_set_name": "Github"
} |
should = require 'should'
sinon = require 'sinon'
shouldSinon = require 'should-sinon'
KDTabViewWithForms = require '../../../lib/components/tabs/tabviewwithforms'
describe 'KDTabViewWithForms', ->
beforeEach ->
@sinon = sinon.sandbox.create()
@o = {}
@o.forms = []
@instance = new KDTabViewWithForms @o, {}
afterEach ->
@instance.destroy()
@sinon.restore()
it 'exists', ->
KDTabViewWithForms.should.exist
describe 'constructor', ->
it 'should instantiate without any errors', ->
@instance.should.exist
| {
"pile_set_name": "Github"
} |
/**
* Copyright (c) 2008-2011 The Open Planning Project
*
* Published under the GPL license.
* See https://github.com/opengeo/gxp/raw/master/license.txt for the full text
* of the license.
*/
/**
* @requires GeoExt/widgets/grid/FeatureSelectionModel.js
* @requires GeoExt/data/FeatureStore.js
*/
/** api: (define)
* module = gxp.grid
* class = FeatureGrid
* base_link = `Ext.grid.GridPanel <http://extjs.com/deploy/dev/docs/?class=Ext.grid.GridPanel>`_
*/
Ext.namespace("gxp.grid");
/** api: constructor
* .. class:: FeatureGrid(config)
*
* Create a new grid displaying the contents of a
* ``GeoExt.data.FeatureStore`` .
*/
gxp.grid.FeatureGrid = Ext.extend(Ext.grid.GridPanel, {
/** api: config[map]
* ``OpenLayers.Map`` If provided, a layer with the features from this
* grid will be added to the map.
*/
map: null,
/** api: config[ignoreFields]
* ``Array`` of field names from the store's records that should not be
* displayed in the grid.
*/
ignoreFields: null,
/** api: config[includeFields]
* ``Array`` of field names from the store's records that should be
* displayed in the grid. All other fields will be ignored.
*/
includeFields: null,
/** api: config[fieldVisibility]
* ``Object`` Property name/visibility name pairs. Optional. If specified,
* only columns with a value of true will be initially shown.
*/
/** api: config[propertyNames]
* ``Object`` Property name/display name pairs. If specified, the display
* name will be shown as column header instead of the property name.
*/
/** api: config[customRenderers]
* ``Object`` Property name/renderer pairs. If specified for a field name,
* the custom renderer will be used instead of the type specific one.
*/
/** api: config[customEditors]
* ``Object`` Property name/editor pairs. If specified for a field name,
* the custom editor will be used instead of the standard textfield.
*/
/** api: config[columnConfig]
* ``Object`` Property name/config pairs. Any additional config that
* should be used on the column, such as making a column non-editable
* by specifying editable: false
*/
/** api: config[layer]
* ``OpenLayers.Layer.Vector``
* The vector layer that will be synchronized with the layer store.
* If the ``map`` config property is provided, this value will be ignored.
*/
/** api: config[schema]
* ``GeoExt.data.AttributeStore``
* Optional schema for the grid. If provided, appropriate field
* renderers (e.g. for date or boolean fields) will be used.
*/
/** api: config[dateFormat]
* ``String`` Date format. Default is the value of
* ``Ext.form.DateField.prototype.format``.
*/
/** api: config[timeFormat]
* ``String`` Time format. Default is the value of
* ``Ext.form.TimeField.prototype.format``.
*/
/** private: property[layer]
* ``OpenLayers.Layer.Vector`` layer displaying features from this grid's
* store
*/
layer: null,
/** api: config[columnsSortable]
* ``Boolean`` Should fields in the grid be sortable? Default is true.
*/
columnsSortable: true,
/** api: config[columnMenuDisabled]
* ``Boolean`` Should the column menu be disabled? Default is false.
*/
columnMenuDisabled: false,
/** api: method[initComponent]
* Initializes the FeatureGrid.
*/
initComponent: function(){
this.ignoreFields = ["feature", "state", "fid"].concat(this.ignoreFields);
if(this.store) {
this.cm = this.createColumnModel(this.store);
// layer automatically added if map provided, otherwise check for
// layer in config
if(this.map) {
this.layer = new OpenLayers.Layer.Vector(this.id + "_layer");
this.map.addLayer(this.layer);
}
} else {
this.store = new Ext.data.Store();
this.cm = new Ext.grid.ColumnModel({
columns: []
});
}
if(this.layer) {
this.sm = this.sm || new GeoExt.grid.FeatureSelectionModel({
layerFromStore: false,
layer: this.layer
});
if(this.store instanceof GeoExt.data.FeatureStore) {
this.store.bind(this.layer);
}
}
if (!this.dateFormat) {
this.dateFormat = Ext.form.DateField.prototype.format;
}
if (!this.timeFormat) {
this.timeFormat = Ext.form.TimeField.prototype.format;
}
gxp.grid.FeatureGrid.superclass.initComponent.call(this);
},
/** private: method[onDestroy]
* Clean up anything created here before calling super onDestroy.
*/
onDestroy: function() {
if(this.initialConfig && this.initialConfig.map &&
!this.initialConfig.layer) {
// we created the layer, let's destroy it
this.layer.destroy();
delete this.layer;
}
gxp.grid.FeatureGrid.superclass.onDestroy.apply(this, arguments);
},
/** api: method[setStore]
* :arg store: ``GeoExt.data.FeatureStore``
* :arg schema: ``GeoExt.data.AttributeStore`` Optional schema to
* determine appropriate field renderers for the grid.
*
* Sets the store for this grid, reconfiguring the column model
*/
setStore: function(store, schema) {
if (schema) {
this.schema = schema;
}
if (store) {
if(this.store instanceof GeoExt.data.FeatureStore) {
this.store.unbind();
}
if(this.layer) {
this.layer.destroyFeatures();
store.bind(this.layer);
}
this.reconfigure(store, this.createColumnModel(store));
} else {
this.reconfigure(
new Ext.data.Store(),
new Ext.grid.ColumnModel({columns: []}));
}
},
/** api: method[getColumns]
* :arg store: ``GeoExt.data.FeatureStore``
* :return: ``Array``
*
* Gets the configuration for the column model.
*/
getColumns: function(store) {
function getRenderer(format) {
return function(value) {
//TODO When http://trac.osgeo.org/openlayers/ticket/3131
// is resolved, change the 5 lines below to
// return value.format(format);
var date = value;
if (typeof value == "string") {
date = Date.parseDate(value.replace(/Z$/, ""), "c");
}
return date ? date.format(format) : value;
};
}
var columns = [],
customEditors = this.customEditors || {},
customRenderers = this.customRenderers || {},
name, type, xtype, format, renderer;
(this.schema || store.fields).each(function(f) {
if (this.schema) {
name = f.get("name");
type = f.get("type").split(":").pop();
format = null;
switch (type) {
case "date":
format = this.dateFormat;
// fall through so date columns also get xtype/renderer set
case "dateTime":
format = format ? format : this.dateFormat + " " + this.timeFormat;
xtype = undefined;
renderer = getRenderer(format);
break;
case "boolean":
xtype = "booleancolumn";
break;
case "string":
xtype = "gridcolumn";
break;
default:
xtype = "numbercolumn";
break;
}
} else {
name = f.name;
}
if (this.ignoreFields.indexOf(name) === -1 &&
(this.includeFields === null || this.includeFields.indexOf(name) >= 0)) {
var columnConfig = this.columnConfig ? this.columnConfig[name] : null;
columns.push(Ext.apply({
dataIndex: name,
hidden: this.fieldVisibility ?
(!this.fieldVisibility[name]) : false,
header: this.propertyNames ?
(this.propertyNames[name] || name) : name,
sortable: this.columnsSortable,
menuDisabled: this.columnMenuDisabled,
xtype: xtype,
editor: customEditors[name] || {
xtype: 'textfield'
},
format: format,
renderer: customRenderers[name] ||
(xtype ? undefined : renderer)
}, columnConfig));
}
}, this);
return columns;
},
/** private: method[createColumnModel]
* :arg store: ``GeoExt.data.FeatureStore``
* :return: ``Ext.grid.ColumnModel``
*/
createColumnModel: function(store) {
var columns = this.getColumns(store);
return new Ext.grid.ColumnModel(columns);
}
});
/** api: xtype = gxp_featuregrid */
Ext.reg('gxp_featuregrid', gxp.grid.FeatureGrid);
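The `getRenderer` helper above strips a trailing "Z" from string values and parses them as ISO 8601 before formatting, falling back to the raw value when parsing fails. A rough Python equivalent (illustrative only, not part of gxp; the format string is a hypothetical stand-in for `Ext.form.DateField.prototype.format`) looks like:

```python
from datetime import datetime

def render_date(value, fmt='%Y-%m-%d'):
    # Mirror getRenderer: if the value is a string, drop a trailing "Z"
    # and parse it as ISO 8601; return the raw value if parsing fails.
    if isinstance(value, str):
        try:
            value = datetime.fromisoformat(value.rstrip('Z'))
        except ValueError:
            return value
    return value.strftime(fmt) if isinstance(value, datetime) else value

print(render_date('2011-06-01T12:30:00Z'))  # 2011-06-01
```

As in the JS version, an unparseable string passes through untouched, so bad data still renders rather than throwing.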
| {
"pile_set_name": "Github"
} |
{
"swagger": "2.0",
"info": {
"title": "DnsManagementClient",
"description": "The DNS Management Client.",
"version": "2016-04-01"
},
"host": "management.azure.com",
"schemes": [
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/dnsZones/{zoneName}/{recordType}/{relativeRecordSetName}": {
"patch": {
"tags": [
"RecordSets"
],
"operationId": "RecordSets_Update",
"description": "Updates a record set within a DNS zone.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "zoneName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the DNS zone (without a terminating dot)."
},
{
"name": "relativeRecordSetName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the record set, relative to the name of the zone.",
"x-ms-skip-url-encoding": true
},
{
"name": "recordType",
"in": "path",
"required": true,
"type": "string",
"description": "The type of DNS record in this record set.",
"enum": [
"A",
"AAAA",
"CNAME",
"MX",
"NS",
"PTR",
"SOA",
"SRV",
"TXT"
],
"x-ms-enum": {
"name": "RecordType",
"modelAsString": false
}
},
{
"name": "parameters",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/RecordSet"
},
"description": "Parameters supplied to the Update operation."
},
{
"name": "If-Match",
"in": "header",
"required": false,
"type": "string",
"x-ms-client-name": "IfMatch",
"description": "The etag of the record set. Omit this value to always overwrite the current record set. Specify the last-seen etag value to prevent accidentally overwriting concurrent changes."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"200": {
"description": "The record set has been updated.",
"schema": {
"$ref": "#/definitions/RecordSet"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
}
},
"put": {
"tags": [
"RecordSets"
],
"operationId": "RecordSets_CreateOrUpdate",
"description": "Creates or updates a record set within a DNS zone.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "zoneName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the DNS zone (without a terminating dot)."
},
{
"name": "relativeRecordSetName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the record set, relative to the name of the zone.",
"x-ms-skip-url-encoding": true
},
{
"name": "recordType",
"in": "path",
"required": true,
"type": "string",
"description": "The type of DNS record in this record set. Record sets of type SOA can be updated but not created (they are created when the DNS zone is created).",
"enum": [
"A",
"AAAA",
"CNAME",
"MX",
"NS",
"PTR",
"SOA",
"SRV",
"TXT"
],
"x-ms-enum": {
"name": "RecordType",
"modelAsString": false
}
},
{
"name": "parameters",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/RecordSet"
},
"description": "Parameters supplied to the CreateOrUpdate operation."
},
{
"name": "If-Match",
"in": "header",
"required": false,
"type": "string",
"x-ms-client-name": "IfMatch",
"description": "The etag of the record set. Omit this value to always overwrite the current record set. Specify the last-seen etag value to prevent accidentally overwriting any concurrent changes."
},
{
"name": "If-None-Match",
"in": "header",
"required": false,
"type": "string",
"x-ms-client-name": "IfNoneMatch",
"description": "Set to '*' to allow a new record set to be created, but to prevent updating an existing record set. Other values will be ignored."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"201": {
"description": "The record set has been created.",
"schema": {
"$ref": "#/definitions/RecordSet"
}
},
"200": {
"description": "The record set has been updated.",
"schema": {
"$ref": "#/definitions/RecordSet"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
}
},
"delete": {
"tags": [
"RecordSets"
],
"operationId": "RecordSets_Delete",
"description": "Deletes a record set from a DNS zone. This operation cannot be undone.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "zoneName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the DNS zone (without a terminating dot)."
},
{
"name": "relativeRecordSetName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the record set, relative to the name of the zone.",
"x-ms-skip-url-encoding": true
},
{
"name": "recordType",
"in": "path",
"required": true,
"type": "string",
"description": "The type of DNS record in this record set. Record sets of type SOA cannot be deleted (they are deleted when the DNS zone is deleted).",
"enum": [
"A",
"AAAA",
"CNAME",
"MX",
"NS",
"PTR",
"SOA",
"SRV",
"TXT"
],
"x-ms-enum": {
"name": "RecordType",
"modelAsString": false
}
},
{
"name": "If-Match",
"in": "header",
"required": false,
"type": "string",
"x-ms-client-name": "IfMatch",
"description": "The etag of the record set. Omit this value to always delete the current record set. Specify the last-seen etag value to prevent accidentally deleting any concurrent changes."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"204": {
"description": "The record set was not found."
},
"200": {
"description": "The record set has been deleted."
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
}
},
"get": {
"tags": [
"RecordSets"
],
"operationId": "RecordSets_Get",
"description": "Gets a record set.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "zoneName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the DNS zone (without a terminating dot)."
},
{
"name": "relativeRecordSetName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the record set, relative to the name of the zone.",
"x-ms-skip-url-encoding": true
},
{
"name": "recordType",
"in": "path",
"required": true,
"type": "string",
"description": "The type of DNS record in this record set.",
"enum": [
"A",
"AAAA",
"CNAME",
"MX",
"NS",
"PTR",
"SOA",
"SRV",
"TXT"
],
"x-ms-enum": {
"name": "RecordType",
"modelAsString": false
}
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"200": {
"description": "Success.",
"schema": {
"$ref": "#/definitions/RecordSet"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
}
}
},
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/dnsZones/{zoneName}/{recordType}": {
"get": {
"tags": [
"RecordSets"
],
"operationId": "RecordSets_ListByType",
"description": "Lists the record sets of a specified type in a DNS zone.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "zoneName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the DNS zone (without a terminating dot)."
},
{
"name": "recordType",
"in": "path",
"required": true,
"type": "string",
"description": "The type of record sets to enumerate.",
"enum": [
"A",
"AAAA",
"CNAME",
"MX",
"NS",
"PTR",
"SOA",
"SRV",
"TXT"
],
"x-ms-enum": {
"name": "RecordType",
"modelAsString": false
}
},
{
"name": "$top",
"in": "query",
"required": false,
"type": "integer",
"format": "int32",
"description": "The maximum number of record sets to return. If not specified, returns up to 100 record sets."
},
{
"name": "$recordsetnamesuffix",
"in": "query",
"required": false,
"type": "string",
"description": "The suffix label of the record set name that has to be used to filter the record set enumerations. If this parameter is specified, enumeration will return only record sets that end with .<recordSetNameSuffix>."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"200": {
"description": "Success.",
"schema": {
"$ref": "#/definitions/RecordSetListResult"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
},
"x-ms-pageable": {
"nextLinkName": "nextLink"
}
}
},
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/dnsZones/{zoneName}/recordsets": {
"get": {
"tags": [
"RecordSets"
],
"operationId": "RecordSets_ListByDnsZone",
"description": "Lists all record sets in a DNS zone.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "zoneName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the DNS zone (without a terminating dot)."
},
{
"name": "$top",
"in": "query",
"required": false,
"type": "integer",
"format": "int32",
"description": "The maximum number of record sets to return. If not specified, returns up to 100 record sets."
},
{
"name": "$recordsetnamesuffix",
"in": "query",
"required": false,
"type": "string",
"description": "The suffix label of the record set name that has to be used to filter the record set enumerations. If this parameter is specified, enumeration will return only record sets that end with .<recordSetNameSuffix>."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"200": {
"description": "Success.",
"schema": {
"$ref": "#/definitions/RecordSetListResult"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
},
"x-ms-pageable": {
"nextLinkName": "nextLink"
}
}
},
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/dnsZones/{zoneName}": {
"put": {
"tags": [
"Zones"
],
"operationId": "Zones_CreateOrUpdate",
"description": "Creates or updates a DNS zone. Does not modify DNS records within the zone.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "zoneName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the DNS zone (without a terminating dot)."
},
{
"name": "parameters",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/Zone"
},
"description": "Parameters supplied to the CreateOrUpdate operation."
},
{
"name": "If-Match",
"in": "header",
"required": false,
"type": "string",
"x-ms-client-name": "IfMatch",
"description": "The etag of the DNS zone. Omit this value to always overwrite the current zone. Specify the last-seen etag value to prevent accidentally overwriting any concurrent changes."
},
{
"name": "If-None-Match",
"in": "header",
"required": false,
"type": "string",
"x-ms-client-name": "IfNoneMatch",
"description": "Set to '*' to allow a new DNS zone to be created, but to prevent updating an existing zone. Other values will be ignored."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"200": {
"description": "The DNS zone has been updated.",
"schema": {
"$ref": "#/definitions/Zone"
}
},
"201": {
"description": "The DNS zone has been created.",
"schema": {
"$ref": "#/definitions/Zone"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
}
},
"delete": {
"tags": [
"Zones"
],
"operationId": "Zones_Delete",
"description": "Deletes a DNS zone. WARNING: All DNS records in the zone will also be deleted. This operation cannot be undone.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "zoneName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the DNS zone (without a terminating dot)."
},
{
"name": "If-Match",
"in": "header",
"required": false,
"type": "string",
"x-ms-client-name": "IfMatch",
"description": "The etag of the DNS zone. Omit this value to always delete the current zone. Specify the last-seen etag value to prevent accidentally deleting any concurrent changes."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"204": {
"description": "The DNS zone was not found."
},
"202": {
"description": "The DNS zone delete operation has been accepted and will complete asynchronously."
},
"200": {
"description": "The DNS zone has been deleted.",
"schema": {
"$ref": "#/definitions/ZoneDeleteResult"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
},
"x-ms-long-running-operation": true
},
"get": {
"tags": [
"Zones"
],
"operationId": "Zones_Get",
"description": "Gets a DNS zone. Retrieves the zone properties, but not the record sets within the zone.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "zoneName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the DNS zone (without a terminating dot)."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"200": {
"description": "Success.",
"schema": {
"$ref": "#/definitions/Zone"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
}
}
},
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/dnsZones": {
"get": {
"tags": [
"Zones"
],
"operationId": "Zones_ListByResourceGroup",
"description": "Lists the DNS zones within a resource group.",
"parameters": [
{
"name": "resourceGroupName",
"in": "path",
"required": true,
"type": "string",
"description": "The name of the resource group."
},
{
"name": "$top",
"in": "query",
"required": false,
"type": "integer",
"format": "int32",
"description": "The maximum number of DNS zones to return. If not specified, returns up to 100 zones."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"200": {
"description": "Success.",
"schema": {
"$ref": "#/definitions/ZoneListResult"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
},
"x-ms-pageable": {
"nextLinkName": "nextLink"
}
}
},
"/subscriptions/{subscriptionId}/providers/Microsoft.Network/dnszones": {
"get": {
"tags": [
"Zones"
],
"operationId": "Zones_List",
"description": "Lists the DNS zones in all resource groups in a subscription.",
"parameters": [
{
"name": "$top",
"in": "query",
"required": false,
"type": "integer",
"format": "int32",
"description": "The maximum number of DNS zones to return. If not specified, returns up to 100 zones."
},
{
"$ref": "#/parameters/ApiVersionParameter"
},
{
"$ref": "#/parameters/SubscriptionIdParameter"
}
],
"responses": {
"200": {
"description": "Success.",
"schema": {
"$ref": "#/definitions/ZoneListResult"
}
},
"default": {
"description": "Default response. It will be deserialized as per the Error definition.",
"schema": {
"$ref": "#/definitions/CloudError"
}
}
},
"x-ms-pageable": {
"nextLinkName": "nextLink"
}
}
}
},
"definitions": {
"ARecord": {
"properties": {
"ipv4Address": {
"type": "string",
"description": "The IPv4 address of this A record."
}
},
"description": "An A record."
},
"AaaaRecord": {
"properties": {
"ipv6Address": {
"type": "string",
"description": "The IPv6 address of this AAAA record."
}
},
"description": "An AAAA record."
},
"MxRecord": {
"properties": {
"preference": {
"type": "integer",
"format": "int32",
"description": "The preference value for this MX record."
},
"exchange": {
"type": "string",
"description": "The domain name of the mail host for this MX record."
}
},
"description": "An MX record."
},
"NsRecord": {
"properties": {
"nsdname": {
"type": "string",
"description": "The name server name for this NS record."
}
},
"description": "An NS record."
},
"PtrRecord": {
"properties": {
"ptrdname": {
"type": "string",
"description": "The PTR target domain name for this PTR record."
}
},
"description": "A PTR record."
},
"SrvRecord": {
"properties": {
"priority": {
"type": "integer",
"format": "int32",
"description": "The priority value for this SRV record."
},
"weight": {
"type": "integer",
"format": "int32",
"description": "The weight value for this SRV record."
},
"port": {
"type": "integer",
"format": "int32",
"description": "The port value for this SRV record."
},
"target": {
"type": "string",
"description": "The target domain name for this SRV record."
}
},
"description": "An SRV record."
},
"TxtRecord": {
"properties": {
"value": {
"type": "array",
"items": {
"type": "string"
},
"description": "The text value of this TXT record."
}
},
"description": "A TXT record."
},
"CnameRecord": {
"properties": {
"cname": {
"type": "string",
"description": "The canonical name for this CNAME record."
}
},
"description": "A CNAME record."
},
"SoaRecord": {
"properties": {
"host": {
"type": "string",
"description": "The domain name of the authoritative name server for this SOA record."
},
"email": {
"type": "string",
"description": "The email contact for this SOA record."
},
"serialNumber": {
"type": "integer",
"format": "int64",
"description": "The serial number for this SOA record."
},
"refreshTime": {
"type": "integer",
"format": "int64",
"description": "The refresh value for this SOA record."
},
"retryTime": {
"type": "integer",
"format": "int64",
"description": "The retry time for this SOA record."
},
"expireTime": {
"type": "integer",
"format": "int64",
"description": "The expire time for this SOA record."
},
"minimumTTL": {
"type": "integer",
"format": "int64",
"x-ms-client-name": "minimumTtl",
"description": "The minimum value for this SOA record. By convention this is used to determine the negative caching duration."
}
},
"description": "An SOA record."
},
"RecordSetProperties": {
"properties": {
"metadata": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "The metadata attached to the record set."
},
"TTL": {
"type": "integer",
"format": "int64",
"description": "The TTL (time-to-live) of the records in the record set."
},
"ARecords": {
"type": "array",
"items": {
"$ref": "#/definitions/ARecord"
},
"description": "The list of A records in the record set."
},
"AAAARecords": {
"type": "array",
"x-ms-client-name": "AaaaRecords",
"items": {
"$ref": "#/definitions/AaaaRecord"
},
"description": "The list of AAAA records in the record set."
},
"MXRecords": {
"type": "array",
"x-ms-client-name": "MxRecords",
"items": {
"$ref": "#/definitions/MxRecord"
},
"description": "The list of MX records in the record set."
},
"NSRecords": {
"type": "array",
"x-ms-client-name": "NsRecords",
"items": {
"$ref": "#/definitions/NsRecord"
},
"description": "The list of NS records in the record set."
},
"PTRRecords": {
"type": "array",
"x-ms-client-name": "PtrRecords",
"items": {
"$ref": "#/definitions/PtrRecord"
},
"description": "The list of PTR records in the record set."
},
"SRVRecords": {
"type": "array",
"x-ms-client-name": "SrvRecords",
"items": {
"$ref": "#/definitions/SrvRecord"
},
"description": "The list of SRV records in the record set."
},
"TXTRecords": {
"type": "array",
"x-ms-client-name": "TxtRecords",
"items": {
"$ref": "#/definitions/TxtRecord"
},
"description": "The list of TXT records in the record set."
},
"CNAMERecord": {
"$ref": "#/definitions/CnameRecord",
"x-ms-client-name": "CnameRecord",
"description": "The CNAME record in the record set."
},
"SOARecord": {
"$ref": "#/definitions/SoaRecord",
"x-ms-client-name": "SoaRecord",
"description": "The SOA record in the record set."
}
},
"description": "Represents the properties of the records in the record set."
},
"RecordSet": {
"properties": {
"id": {
"type": "string",
"description": "The ID of the record set."
},
"name": {
"type": "string",
"description": "The name of the record set."
},
"type": {
"type": "string",
"description": "The type of the record set."
},
"etag": {
"type": "string",
"description": "The etag of the record set."
},
"properties": {
"$ref": "#/definitions/RecordSetProperties",
"x-ms-client-flatten": true,
"description": "The properties of the record set."
}
},
"description": "Describes a DNS record set (a collection of DNS records with the same name and type)."
},
"RecordSetUpdateParameters": {
"properties": {
"RecordSet": {
"$ref": "#/definitions/RecordSet",
"description": "Specifies information about the record set being updated."
}
},
"description": "Parameters supplied to update a record set."
},
"RecordSetListResult": {
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/definitions/RecordSet"
},
"description": "Information about the record sets in the response."
},
"nextLink": {
"type": "string",
"description": "The continuation token for the next page of results."
}
},
"description": "The response to a record set List operation."
},
"ZoneProperties": {
"properties": {
"maxNumberOfRecordSets": {
"type": "integer",
"format": "int64",
"description": "The maximum number of record sets that can be created in this DNS zone. This is a read-only property and any attempt to set this value will be ignored."
},
"numberOfRecordSets": {
"type": "integer",
"format": "int64",
"description": "The current number of record sets in this DNS zone. This is a read-only property and any attempt to set this value will be ignored."
},
"nameServers": {
"type": "array",
"items": {
"type": "string"
},
"description": "The name servers for this DNS zone. This is a read-only property and any attempt to set this value will be ignored.",
"readOnly": true
}
},
"description": "Represents the properties of the zone."
},
"Zone": {
"properties": {
"etag": {
"type": "string",
"description": "The etag of the zone."
},
"properties": {
"x-ms-client-flatten": true,
"$ref": "#/definitions/ZoneProperties",
"description": "The properties of the zone."
}
},
"allOf": [
{
"$ref": "#/definitions/Resource"
}
],
"description": "Describes a DNS zone."
},
"ZoneDeleteResult": {
"properties": {
"azureAsyncOperation": {
"type": "string",
"description": "Users can perform a Get on Azure-AsyncOperation to get the status of their delete Zone operations."
},
"status": {
"type": "string",
"enum": [
"InProgress",
"Succeeded",
"Failed"
],
"x-ms-enum": {
"name": "OperationStatus",
"modelAsString": false
}
},
"statusCode": {
"type": "string",
"enum": [
"Continue",
"SwitchingProtocols",
"OK",
"Created",
"Accepted",
"NonAuthoritativeInformation",
"NoContent",
"ResetContent",
"PartialContent",
"MultipleChoices",
"Ambiguous",
"MovedPermanently",
"Moved",
"Found",
"Redirect",
"SeeOther",
"RedirectMethod",
"NotModified",
"UseProxy",
"Unused",
"TemporaryRedirect",
"RedirectKeepVerb",
"BadRequest",
"Unauthorized",
"PaymentRequired",
"Forbidden",
"NotFound",
"MethodNotAllowed",
"NotAcceptable",
"ProxyAuthenticationRequired",
"RequestTimeout",
"Conflict",
"Gone",
"LengthRequired",
"PreconditionFailed",
"RequestEntityTooLarge",
"RequestUriTooLong",
"UnsupportedMediaType",
"RequestedRangeNotSatisfiable",
"ExpectationFailed",
"UpgradeRequired",
"InternalServerError",
"NotImplemented",
"BadGateway",
"ServiceUnavailable",
"GatewayTimeout",
"HttpVersionNotSupported"
],
"x-ms-enum": {
"name": "HttpStatusCode",
"modelAsString": false
}
},
"requestId": {
"type": "string"
}
},
"description": "The response to a Zone Delete operation."
},
"ZoneListResult": {
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/definitions/Zone"
},
"description": "Information about the DNS zones."
},
"nextLink": {
"type": "string",
"description": "The continuation token for the next page of results."
}
},
"description": "The response to a Zone List or ListAll operation."
},
"Resource": {
"x-ms-azure-resource": true,
"properties": {
"id": {
"readOnly": true,
"type": "string",
"description": "Resource ID."
},
"name": {
"readOnly": true,
"type": "string",
"description": "Resource name."
},
"type": {
"readOnly": true,
"type": "string",
"description": "Resource type."
},
"location": {
"type": "string",
"description": "Resource location."
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "Resource tags."
}
},
"required": [
"location"
]
},
"SubResource": {
"properties": {
"id": {
"type": "string",
"description": "Resource Id."
}
},
"x-ms-external": true
},
"CloudError": {
"type": "object",
"properties": { "error": { "$ref": "#/definitions/CloudErrorBody" } },
"x-ms-external": true
},
"CloudErrorBody": {
"type": "object",
"properties": {
"code": { "type": "string" },
"message": { "type": "string" },
"target": { "type": "string" },
"details": {
"type": "array",
"items": { "$ref": "#/definitions/CloudErrorBody" }
}
},
"x-ms-external": true
}
},
"parameters": {
"SubscriptionIdParameter": {
"name": "subscriptionId",
"in": "path",
"required": true,
"type": "string",
"description": "Specifies the Azure subscription ID, which uniquely identifies the Microsoft Azure subscription."
},
"ApiVersionParameter": {
"name": "api-version",
"in": "query",
"required": true,
"type": "string",
"description": "Specifies the API version."
}
}
}
/**
* @license
* Copyright (c) 2016 The Polymer Project Authors. All rights reserved.
* This code may only be used under the BSD style license found at
* http://polymer.github.io/LICENSE.txt
* The complete set of authors may be found at
* http://polymer.github.io/AUTHORS.txt
* The complete set of contributors may be found at
* http://polymer.github.io/CONTRIBUTORS.txt
* Code distributed by Google as part of the polymer project is also
* subject to an additional IP rights grant found at
* http://polymer.github.io/PATENTS.txt
*/
import * as chalk from 'chalk';
import {Analyzer} from '../core/analyzer';
import {ParsedDocument} from '../parser/document';
import {underlineCode} from '../warning/code-printer';
import {Analysis} from './analysis';
import {comparePositionAndRange, isPositionInsideRange, SourceRange} from './source-range';
import stable = require('stable');
import {ResolvedUrl} from './url';
import {UrlResolver} from '../index';
export interface WarningInit {
readonly message: string;
readonly sourceRange: SourceRange;
readonly severity: Severity;
readonly code: string;
readonly parsedDocument: ParsedDocument;
readonly fix?: Edit;
readonly actions?: ReadonlyArray<Action>;
}
export class Warning {
readonly code: string;
readonly message: string;
readonly sourceRange: SourceRange;
readonly severity: Severity;
/**
* If the problem has a single automatic fix, this is it.
*
* Whether and how much something is 'automatic' can be a bit tricky to
* delineate. Roughly speaking, if 99% of the time the change solves the
* issue completely then it should go in `fix`.
*/
readonly fix: Edit|undefined;
/**
* Other actions that could be taken in response to this warning.
*
* Each action is separate and they may be mutually exclusive. In the case
* of edit actions they often are.
*/
readonly actions: ReadonlyArray<Action>|undefined = undefined;
private readonly _parsedDocument: ParsedDocument;
constructor(init: WarningInit) {
({
message: this.message,
sourceRange: this.sourceRange,
severity: this.severity,
code: this.code,
parsedDocument: this._parsedDocument,
} = init);
this.fix = init.fix;
this.actions = init.actions;
if (!this.sourceRange) {
throw new Error(
`Attempted to construct a ${this.code} ` +
`warning without a source range.`);
}
if (!this._parsedDocument) {
throw new Error(
`Attempted to construct a ${this.code} ` +
`warning without a parsed document.`);
}
}
toString(options: Partial<WarningStringifyOptions> = {}): string {
const opts:
WarningStringifyOptions = {...defaultPrinterOptions, ...options};
const colorize = opts.color ? this._severityToColorFunction(this.severity) :
(s: string) => s;
const severity = this._severityToString(colorize);
let result = '';
if (options.verbosity !== 'one-line') {
const underlined =
underlineCode(this.sourceRange, this._parsedDocument, colorize);
if (underlined) {
result += underlined;
}
if (options.verbosity === 'code-only') {
return result;
}
result += '\n\n';
}
let file: string = this.sourceRange.file;
if (opts.resolver) {
file = opts.resolver.relative(this.sourceRange.file);
}
result += `${file}(${this.sourceRange.start.line + 1},${
this.sourceRange.start.column +
1}) ${severity} [${this.code}] - ${this.message}\n`;
return result;
}
private _severityToColorFunction(severity: Severity) {
switch (severity) {
case Severity.ERROR:
return chalk.red;
case Severity.WARNING:
return chalk.yellow;
case Severity.INFO:
return chalk.green;
default:
const never: never = severity;
throw new Error(
`Unknown severity value - ${never}` +
` - encountered while printing warning.`);
}
}
private _severityToString(colorize: (s: string) => string) {
switch (this.severity) {
case Severity.ERROR:
return colorize('error');
case Severity.WARNING:
return colorize('warning');
case Severity.INFO:
return colorize('info');
default:
const never: never = this.severity;
throw new Error(
`Unknown severity value - ${never} - ` +
`encountered while printing warning.`);
}
}
toJSON() {
return {
code: this.code,
message: this.message,
severity: this.severity,
sourceRange: this.sourceRange,
};
}
}
export enum Severity {
ERROR,
WARNING,
INFO
}
// TODO(rictic): can we get rid of this class entirely?
export class WarningCarryingException extends Error {
readonly warning: Warning;
constructor(warning: Warning) {
super(warning.message);
this.warning = warning;
}
}
export type Verbosity = 'one-line'|'full'|'code-only';
export interface WarningStringifyOptions {
readonly verbosity: Verbosity;
readonly color: boolean;
/**
* If given, we will use resolver.relative to get a relative path
* to the reported file.
*/
readonly resolver?: UrlResolver;
}
const defaultPrinterOptions = {
verbosity: 'full' as 'full',
color: true
};
export type Action = EditAction|{
/** To ensure that type safe code actually checks for the action kind. */
kind: 'never';
};
/**
* An EditAction is like a fix, only it's not applied automatically when the
* user runs `polymer lint --fix`. Often this is because it's less safe to
* apply automatically, and there may be caveats, or multiple ways to resolve
* the warning.
*
* For example, a change to an element that updates it to no longer use a
* deprecated feature, but that involves a change in the element's API should
* not be a fix, but should instead be an EditAction.
*/
export interface EditAction {
kind: 'edit';
/**
* A unique string code for the edit action. Useful so that the user can
* request that all actions with a given code should be applied.
*/
code: string;
/**
* A short description of the change, noting caveats and important information
* for the user.
*/
description: string;
edit: Edit;
}
/**
* Represents an action for replacing a range in a document with some text.
*
* This is sufficient to represent all operations on text files, including
* inserting and deleting text (using empty ranges or empty replacement
* text, respectively).
*/
export interface Replacement {
readonly range: SourceRange;
readonly replacementText: string;
}
/**
* A set of replacements that must all be applied as a single atomic unit.
*/
export type Edit = ReadonlyArray<Replacement>;
export interface EditResult {
/** The edits that had no conflicts, and are reflected in editedFiles. */
appliedEdits: Edit[];
/** Edits that could not be applied due to overlapping ranges. */
incompatibleEdits: Edit[];
/** A map from urls to their new contents. */
editedFiles: Map<ResolvedUrl, string>;
}
/**
* Takes the given edits and, provided there are no overlaps, applies them to
* the contents loadable from the given loader.
*
* If there are overlapping edits, then edits earlier in the array get priority
* over later ones.
*/
export async function applyEdits(
edits: Edit[],
loader: (url: ResolvedUrl) =>
Promise<ParsedDocument<any, any>>): Promise<EditResult> {
const result: EditResult = {
appliedEdits: [],
incompatibleEdits: [],
editedFiles: new Map()
};
const replacementsByFile = new Map<ResolvedUrl, Replacement[]>();
for (const edit of edits) {
if (canApply(edit, replacementsByFile)) {
result.appliedEdits.push(edit);
} else {
result.incompatibleEdits.push(edit);
}
}
for (const [file, replacements] of replacementsByFile) {
const document = await loader(file);
let contents = document.contents;
/**
* This is the important bit. We know that none of the replacements overlap,
* so in order for their source ranges in the file to all be valid at the
* time we apply them, we simply need to apply them starting from the end
* of the document and working backwards to the beginning.
*
* To preserve ordering of insertions to the same position, we use a stable
* sort.
*/
stable.inplace(replacements, (a, b) => {
const leftEdgeComp =
comparePositionAndRange(b.range.start, a.range, true);
if (leftEdgeComp !== 0) {
return leftEdgeComp;
}
return comparePositionAndRange(b.range.end, a.range, false);
});
for (const replacement of replacements) {
const offsets = document.sourceRangeToOffsets(replacement.range);
contents = contents.slice(0, offsets[0]) + replacement.replacementText +
contents.slice(offsets[1]);
}
result.editedFiles.set(file, contents);
}
return result;
}
/**
* We can apply an edit if none of its replacements overlap with any accepted
* replacement.
*/
function canApply(
edit: Edit, replacements: Map<ResolvedUrl, Replacement[]>): boolean {
for (let i = 0; i < edit.length; i++) {
const replacement = edit[i];
const fileLocalReplacements =
replacements.get(replacement.range.file) || [];
// TODO(rictic): binary search
for (const acceptedReplacement of fileLocalReplacements) {
if (!areReplacementsCompatible(replacement, acceptedReplacement)) {
return false;
}
}
// Also check consistency between multiple replacements in this edit.
for (let j = 0; j < i; j++) {
const acceptedReplacement = edit[j];
if (!areReplacementsCompatible(replacement, acceptedReplacement)) {
return false;
}
}
}
// Ok, we can be applied to the replacements, so add our replacements in.
for (const replacement of edit) {
if (!replacements.has(replacement.range.file)) {
replacements.set(replacement.range.file, [replacement]);
} else {
const fileReplacements = replacements.get(replacement.range.file)!;
fileReplacements.push(replacement);
}
}
return true;
}
function areReplacementsCompatible(a: Replacement, b: Replacement) {
if (a.range.file !== b.range.file) {
return true;
}
if (areRangesEqual(a.range, b.range)) {
// Equal ranges are compatible if the ranges are empty (i.e. the edit is an
// insertion, not a replacement).
return (
a.range.start.column === a.range.end.column &&
a.range.start.line === a.range.end.line);
}
return !(
isPositionInsideRange(a.range.start, b.range, false) ||
isPositionInsideRange(a.range.end, b.range, false) ||
isPositionInsideRange(b.range.start, a.range, false) ||
isPositionInsideRange(b.range.end, a.range, false));
}
function areRangesEqual(a: SourceRange, b: SourceRange) {
return a.start.line === b.start.line && a.start.column === b.start.column &&
a.end.line === b.end.line && a.end.column === b.end.column;
}
export function makeParseLoader(analyzer: Analyzer, analysis?: Analysis) {
return async (url: ResolvedUrl) => {
if (analysis) {
const cachedResult = analysis.getDocument(url);
if (cachedResult.successful) {
return cachedResult.value.parsedDocument;
}
}
const result = (await analyzer.analyze([url])).getDocument(url);
if (result.successful) {
return result.value.parsedDocument;
}
let message = '';
if (result.error) {
message = result.error.message;
}
throw new Error(`Cannot load file at: ${JSON.stringify(url)}: ${message}`);
};
}
/*
boost/numeric/odeint/stepper/detail/controlled_adams_bashforth_moulton.hpp
[begin_description]
 Implementation of a controlled Adams-Bashforth-Moulton stepper.
[end_description]
Copyright 2017 Valentin Noah Hartmann
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or
copy at http://www.boost.org/LICENSE_1_0.txt)
*/
#ifndef BOOST_NUMERIC_ODEINT_STEPPER_CONTROLLED_ADAMS_BASHFORTH_MOULTON_HPP_INCLUDED
#define BOOST_NUMERIC_ODEINT_STEPPER_CONTROLLED_ADAMS_BASHFORTH_MOULTON_HPP_INCLUDED
#include <boost/numeric/odeint/stepper/stepper_categories.hpp>
#include <boost/numeric/odeint/stepper/controlled_step_result.hpp>
#include <boost/numeric/odeint/stepper/adaptive_adams_bashforth_moulton.hpp>
#include <boost/numeric/odeint/stepper/detail/pid_step_adjuster.hpp>
#include <boost/numeric/odeint/util/unwrap_reference.hpp>
#include <boost/numeric/odeint/util/is_resizeable.hpp>
#include <boost/numeric/odeint/util/resizer.hpp>
#include <boost/numeric/odeint/util/copy.hpp>
#include <boost/numeric/odeint/util/bind.hpp>
#include <iostream>
namespace boost {
namespace numeric {
namespace odeint {
template<
size_t MaxOrder,
class State,
class Value = double,
class Algebra = typename algebra_dispatcher< State >::algebra_type
>
class default_order_adjuster
{
public:
typedef State state_type;
typedef Value value_type;
typedef state_wrapper< state_type > wrapped_state_type;
typedef Algebra algebra_type;
default_order_adjuster( const algebra_type &algebra = algebra_type() )
: m_algebra( algebra )
{};
size_t adjust_order(size_t order, size_t init, boost::array<wrapped_state_type, 4> &xerr)
{
using std::abs;
value_type errc = abs(m_algebra.norm_inf(xerr[2].m_v));
value_type errm1 = 3*errc;
value_type errm2 = 3*errc;
if(order > 2)
{
errm2 = abs(m_algebra.norm_inf(xerr[0].m_v));
}
if(order >= 2)
{
errm1 = abs(m_algebra.norm_inf(xerr[1].m_v));
}
size_t o_new = order;
if(order == 2 && errm1 <= 0.5*errc)
{
o_new = order - 1;
}
else if(order > 2 && errm2 < errc && errm1 < errc)
{
o_new = order - 1;
}
if(init < order)
{
return order+1;
}
else if(o_new == order - 1)
{
return order-1;
}
else if(order <= MaxOrder)
{
value_type errp = abs(m_algebra.norm_inf(xerr[3].m_v));
if(order > 1 && errm1 < errc && errp)
{
return order-1;
}
else if(order < MaxOrder && errp < (0.5-0.25*order/MaxOrder) * errc)
{
return order+1;
}
}
return order;
};
private:
algebra_type m_algebra;
};
template<
class ErrorStepper,
class StepAdjuster = detail::pid_step_adjuster< typename ErrorStepper::state_type,
typename ErrorStepper::value_type,
typename ErrorStepper::deriv_type,
typename ErrorStepper::time_type,
typename ErrorStepper::algebra_type,
typename ErrorStepper::operations_type,
detail::H211PI
>,
class OrderAdjuster = default_order_adjuster< ErrorStepper::order_value,
typename ErrorStepper::state_type,
typename ErrorStepper::value_type,
typename ErrorStepper::algebra_type
>,
class Resizer = initially_resizer
>
class controlled_adams_bashforth_moulton
{
public:
typedef ErrorStepper stepper_type;
static const typename stepper_type::order_type order_value = stepper_type::order_value;
typedef typename stepper_type::state_type state_type;
typedef typename stepper_type::value_type value_type;
typedef typename stepper_type::deriv_type deriv_type;
typedef typename stepper_type::time_type time_type;
typedef typename stepper_type::algebra_type algebra_type;
typedef typename stepper_type::operations_type operations_type;
typedef Resizer resizer_type;
typedef StepAdjuster step_adjuster_type;
typedef OrderAdjuster order_adjuster_type;
typedef controlled_stepper_tag stepper_category;
typedef typename stepper_type::wrapped_state_type wrapped_state_type;
typedef typename stepper_type::wrapped_deriv_type wrapped_deriv_type;
typedef boost::array< wrapped_state_type , 4 > error_storage_type;
typedef typename stepper_type::coeff_type coeff_type;
typedef controlled_adams_bashforth_moulton< ErrorStepper , StepAdjuster , OrderAdjuster , Resizer > controlled_stepper_type;
controlled_adams_bashforth_moulton(step_adjuster_type step_adjuster = step_adjuster_type())
:m_stepper(),
m_dxdt_resizer(), m_xerr_resizer(), m_xnew_resizer(),
m_step_adjuster( step_adjuster ), m_order_adjuster()
{};
template< class ExplicitStepper, class System >
void initialize(ExplicitStepper stepper, System system, state_type &inOut, time_type &t, time_type dt)
{
m_stepper.initialize(stepper, system, inOut, t, dt);
};
template< class System >
void initialize(System system, state_type &inOut, time_type &t, time_type dt)
{
m_stepper.initialize(system, inOut, t, dt);
};
template< class ExplicitStepper, class System >
void initialize_controlled(ExplicitStepper stepper, System system, state_type &inOut, time_type &t, time_type &dt)
{
reset();
coeff_type &coeff = m_stepper.coeff();
m_dxdt_resizer.adjust_size( inOut , detail::bind( &controlled_stepper_type::template resize_dxdt_impl< state_type > , detail::ref( *this ) , detail::_1 ) );
controlled_step_result res = fail;
for( size_t i=0 ; i<order_value; ++i )
{
do
{
res = stepper.try_step( system, inOut, t, dt );
}
while(res != success);
system( inOut , m_dxdt.m_v , t );
coeff.predict(t-dt, dt);
coeff.do_step(m_dxdt.m_v);
coeff.confirm();
if(coeff.m_eo < order_value)
{
++coeff.m_eo;
}
}
}
template< class System >
controlled_step_result try_step(System system, state_type & inOut, time_type &t, time_type &dt)
{
m_xnew_resizer.adjust_size( inOut , detail::bind( &controlled_stepper_type::template resize_xnew_impl< state_type > , detail::ref( *this ) , detail::_1 ) );
controlled_step_result res = try_step(system, inOut, t, m_xnew.m_v, dt);
if(res == success)
{
boost::numeric::odeint::copy( m_xnew.m_v , inOut);
}
return res;
};
template< class System >
controlled_step_result try_step(System system, const state_type & in, time_type &t, state_type & out, time_type &dt)
{
m_xerr_resizer.adjust_size( in , detail::bind( &controlled_stepper_type::template resize_xerr_impl< state_type > , detail::ref( *this ) , detail::_1 ) );
m_dxdt_resizer.adjust_size( in , detail::bind( &controlled_stepper_type::template resize_dxdt_impl< state_type > , detail::ref( *this ) , detail::_1 ) );
m_stepper.do_step_impl(system, in, t, out, dt, m_xerr[2].m_v);
coeff_type &coeff = m_stepper.coeff();
time_type dtPrev = dt;
dt = m_step_adjuster.adjust_stepsize(coeff.m_eo, dt, m_xerr[2].m_v, out, m_stepper.dxdt() );
if(dt / dtPrev >= step_adjuster_type::threshold())
{
system(out, m_dxdt.m_v, t+dtPrev);
coeff.do_step(m_dxdt.m_v);
coeff.confirm();
t += dtPrev;
size_t eo = coeff.m_eo;
// estimate errors for next step
double factor = 1;
algebra_type m_algebra;
m_algebra.for_each2(m_xerr[2].m_v, coeff.phi[1][eo].m_v,
typename operations_type::template scale_sum1<double>(factor*dt*(coeff.gs[eo])));
if(eo > 1)
{
m_algebra.for_each2(m_xerr[1].m_v, coeff.phi[1][eo-1].m_v,
typename operations_type::template scale_sum1<double>(factor*dt*(coeff.gs[eo-1])));
}
if(eo > 2)
{
m_algebra.for_each2(m_xerr[0].m_v, coeff.phi[1][eo-2].m_v,
typename operations_type::template scale_sum1<double>(factor*dt*(coeff.gs[eo-2])));
}
if(eo < order_value && coeff.m_eo < coeff.m_steps_init-1)
{
m_algebra.for_each2(m_xerr[3].m_v, coeff.phi[1][eo+1].m_v,
typename operations_type::template scale_sum1<double>(factor*dt*(coeff.gs[eo+1])));
}
// adjust order
coeff.m_eo = m_order_adjuster.adjust_order(coeff.m_eo, coeff.m_steps_init-1, m_xerr);
return success;
}
else
{
return fail;
}
};
void reset() { m_stepper.reset(); };
private:
template< class StateType >
bool resize_dxdt_impl( const StateType &x )
{
return adjust_size_by_resizeability( m_dxdt, x, typename is_resizeable<deriv_type>::type() );
};
template< class StateType >
bool resize_xerr_impl( const StateType &x )
{
bool resized( false );
for(size_t i=0; i<m_xerr.size(); ++i)
{
resized |= adjust_size_by_resizeability( m_xerr[i], x, typename is_resizeable<state_type>::type() );
}
return resized;
};
template< class StateType >
bool resize_xnew_impl( const StateType &x )
{
return adjust_size_by_resizeability( m_xnew, x, typename is_resizeable<state_type>::type() );
};
stepper_type m_stepper;
wrapped_deriv_type m_dxdt;
error_storage_type m_xerr;
wrapped_state_type m_xnew;
resizer_type m_dxdt_resizer;
resizer_type m_xerr_resizer;
resizer_type m_xnew_resizer;
step_adjuster_type m_step_adjuster;
order_adjuster_type m_order_adjuster;
};
} // odeint
} // numeric
} // boost
#endif
module WeixinRailsMiddleware
module ReplyWeixinMessageHelper
# e.g.
# reply_text_message(@weixin_message.ToUserName, @weixin_message.FromUserName, "Your Message: #{@weixin_message.Content}")
# Or reply_text_message("Your Message: #{@weixin_message.Content}")
def reply_text_message(from=nil, to=nil, content)
message = TextReplyMessage.new
message.FromUserName = from || @weixin_message.ToUserName
message.ToUserName = to || @weixin_message.FromUserName
message.Content = content
encrypt_message message.to_xml
end
def generate_music(title, desc, music_url, hq_music_url)
music = Music.new
music.Title = title
music.Description = desc
music.MusicUrl = music_url
music.HQMusicUrl = hq_music_url
music
end
# music = generate_music
def reply_music_message(from=nil, to=nil, music)
message = MusicReplyMessage.new
message.FromUserName = from || @weixin_message.ToUserName
message.ToUserName = to || @weixin_message.FromUserName
message.Music = music
encrypt_message message.to_xml
end
def generate_article(title, desc, pic_url, link_url)
item = Article.new
item.Title = title
item.Description = desc
item.PicUrl = pic_url
item.Url = link_url
item
end
# articles = [generate_article]
def reply_news_message(from=nil, to=nil, articles)
message = NewsReplyMessage.new
message.FromUserName = from || @weixin_message.ToUserName
message.ToUserName = to || @weixin_message.FromUserName
message.Articles = articles
message.ArticleCount = articles.count
encrypt_message message.to_xml
end
def generate_video(media_id, desc, title)
video = Video.new
video.MediaId = media_id
video.Title = title
video.Description = desc
video
end
# <xml>
# <ToUserName><![CDATA[toUser]]></ToUserName>
# <FromUserName><![CDATA[fromUser]]></FromUserName>
# <CreateTime>12345678</CreateTime>
# <MsgType><![CDATA[video]]></MsgType>
# <Video>
# <MediaId><![CDATA[media_id]]></MediaId>
# <Title><![CDATA[title]]></Title>
# <Description><![CDATA[description]]></Description>
# </Video>
# </xml>
def reply_video_message(from=nil, to=nil, video)
message = VideoReplyMessage.new
message.FromUserName = from || @weixin_message.ToUserName
message.ToUserName = to || @weixin_message.FromUserName
message.Video = video
encrypt_message message.to_xml
end
def generate_voice(media_id)
voice = Voice.new
voice.MediaId = media_id
voice
end
def reply_voice_message(from=nil, to=nil, voice)
message = VoiceReplyMessage.new
message.FromUserName = from || @weixin_message.ToUserName
message.ToUserName = to || @weixin_message.FromUserName
message.Voice = voice
encrypt_message message.to_xml
end
def generate_image(media_id)
image = Image.new
image.MediaId = media_id
image
end
def reply_image_message(from=nil, to=nil, image)
message = ImageReplyMessage.new
message.FromUserName = from || @weixin_message.ToUserName
message.ToUserName = to || @weixin_message.FromUserName
message.Image = image
encrypt_message message.to_xml
end
    # Specify the customer service account that takes over the session
def generate_kf_trans_info(kf_account)
trans_info = KfTransInfo.new
trans_info.KfAccount = kf_account
trans_info
end
    # Forward the message to the multi-customer-service system
    # Forward the message to a specified customer service account
def reply_transfer_customer_service_message(from=nil, to=nil, kf_account=nil)
if kf_account.blank?
message = TransferCustomerServiceReplyMessage.new
else
message = TransferCustomerServiceWithTransInfoReplyMessage.new
message.TransInfo = generate_kf_trans_info(kf_account)
end
message.FromUserName = from || @weixin_message.ToUserName
message.ToUserName = to || @weixin_message.FromUserName
encrypt_message message.to_xml
end
private
def encrypt_message(msg_xml)
return msg_xml if !@is_encrypt
      # Encrypt the reply XML
encrypt_xml = Prpcrypt.encrypt(@weixin_public_account.aes_key, msg_xml, @weixin_public_account.app_id).gsub("\n","")
      # Build the standard response packet
generate_encrypt_message(encrypt_xml)
end
def generate_encrypt_message(encrypt_xml)
msg = EncryptMessage.new
msg.Encrypt = encrypt_xml
msg.TimeStamp = Time.now.to_i.to_s
msg.Nonce = SecureRandom.hex(8)
msg.MsgSignature = generate_msg_signature(encrypt_xml, msg)
msg.to_xml
end
# dev_msg_signature=sha1(sort(token、timestamp、nonce、msg_encrypt))
    # Generate the message signature
def generate_msg_signature(encrypt_msg, msg)
sort_params = [encrypt_msg, @weixin_adapter.current_weixin_token,
msg.TimeStamp, msg.Nonce].sort.join
Digest::SHA1.hexdigest(sort_params)
end
end
end
/*
* Copyright (c) 1997, 2004, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
package java.rmi.registry;
import java.rmi.RemoteException;
import java.rmi.UnknownHostException;
/**
* <code>RegistryHandler</code> is an interface used internally by the RMI
* runtime in previous implementation versions. It should never be accessed
* by application code.
*
* @author Ann Wollrath
* @since 1.1
* @deprecated no replacement
*/
@Deprecated
public interface RegistryHandler {
/**
* Returns a "stub" for contacting a remote registry
* on the specified host and port.
*
* @deprecated no replacement. As of the Java 2 platform v1.2, RMI no
* longer uses the <code>RegistryHandler</code> to obtain the registry's
* stub.
* @param host name of remote registry host
* @param port remote registry port
* @return remote registry stub
* @throws RemoteException if a remote error occurs
* @throws UnknownHostException if unable to resolve given hostname
*/
@Deprecated
Registry registryStub(String host, int port)
throws RemoteException, UnknownHostException;
/**
* Constructs and exports a Registry on the specified port.
* The port must be non-zero.
*
* @deprecated no replacement. As of the Java 2 platform v1.2, RMI no
* longer uses the <code>RegistryHandler</code> to obtain the registry's
* implementation.
* @param port port to export registry on
* @return registry stub
* @throws RemoteException if a remote error occurs
*/
@Deprecated
Registry registryImpl(int port) throws RemoteException;
}
#include "pch.h"
'use strict'
// Returns true when `o` is array-like, i.e. truthy with a defined `length` property.
// Coerce to a real boolean so callers get true/false rather than a passed-through falsy value.
module.exports = function isArrayLike (o) {
  return !!(o && o.length !== undefined);
};
=================================
Voltage Transformer Configuration
=================================
A prime component of electrical power is voltage.
The AC line frequency is the heartbeat of the IoTaWatt.
A reliable and accurate AC voltage reference is very important.
You should have installed the device with a 9 Volt AC voltage reference
transformer (VT) plugged into the channel zero power jack.
If your initial configuration has this channel pre-configured,
your LED will be glowing green because it's rhythmically sampling that voltage.
Various transformer models produce different voltages,
and it's important to ensure that the VT specified
matches the model that you have installed.
To do this, select the inputs button in the Setup dropdown menu.
.. image:: pics/VTinputList.png
:scale: 60 %
:align: center
:alt: Setup Inputs List
A list of all of the inputs will be displayed.
The first entry will be input 0 and a default VT should be configured.
Check to see if it's the same as your VT model.
It's OK to unplug the VT to check the model number printed on it.
If your VT model doesn't match the model that is configured, you can easily change it.
Click on the input channel 0 button on the left.
.. image:: pics/VTconfig.png
:scale: 60 %
:align: center
:alt: Configure VT Menu
As you can see, the display changes to reveal the details of the input_0 configuration.
.. image:: pics/VTselect.png
:scale: 60 %
:align: center
:alt: Select VT Image
VT Model Selection
------------------
If your make and model is listed, select it from the list.
At this point, you can just click |save| and the standard
calibration for your VT will be used.
That calibration should be good for all but the most discerning users.
If you have access to a good voltmeter or other reliable
high accuracy voltage reference,
you can fine tune with the calibration procedure below, but for average users,
you should be good to go on to the next step, Adding Power Channel CTs.
If your VT wasn't listed in the dropdown above,
the generic entry is a reasonable starting point
that will get you in the ball park for your 9-12Vac adapter.
If your country uses 230V or 240V, select "generic240V".
Now you must perform the `Voltage Calibration`_ procedure below.
TDC DA-10-09 model ambiguity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are two different voltage transformers available with the model designation TDC DA-10-09.
These models are quite different and need to be properly configured.
.. figure:: pics/TDC-DA-09-10.jpg
:scale: 10 %
:align: left
:alt: TDC DA-10-09
use model: TDC DA-10-09
.. figure:: pics/TDC-DA-09-10-E6.jpg
:scale: 10 %
:align: center
:alt: TDC DA-10-09-E6
use model: TDC DA-10-09-E6
Voltage Calibration
-------------------
Again, if you are using one of the standard voltage transformers from
the tables, this step is optional.
Repeated random tests on the standard US and
Euro transformers yield excellent calibration right out of the box.
You will need a halfway decent voltage reference for this step.
If you don't have a decent true RMS voltmeter and can't borrow one,
go out and get a Kill-a-Watt.
They cost less than $20 (some libraries lend them out) and
I've found their voltage readings are usually accurate.
Click |calibrate|.
.. image:: pics/VTcalibrate.png
:scale: 60 %
:align: center
:alt: Calibrate VT Menu
Follow the instructions on the page. Increase or decrease the "cal" factor
until the voltage shown settles down and is a pretty
good match with your reference meter.
It's not possible to match exactly: a 0.2V difference in a
120V installation is under 0.2% variation,
and good meter accuracy is 1% at best. Just try to get the
two readings to dwell around the same fractional digits.
As instructed on the page, click save to record the calibration factor.
The new calibration factor will take effect immediately.
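The tolerance discussed above can be made concrete with a short calculation. This is an illustrative sketch (the `percent_deviation` helper is hypothetical, not part of the IoTaWatt firmware): a 0.2V spread on a 120V line works out to roughly 0.17%, comfortably inside the ~1% accuracy of a good meter.

```python
def percent_deviation(measured: float, reference: float) -> float:
    """Return the deviation of `measured` from `reference`, in percent."""
    return abs(measured - reference) / reference * 100.0

# A 0.2 V disagreement between the IoTaWatt and the reference meter at 120 V:
print(round(percent_deviation(120.2, 120.0), 2))  # ≈ 0.17
```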
Click the Status menu button to display the voltage:
.. image:: pics/VTstatus.png
:scale: 60 %
:align: center
:alt: VT Status
Wait a few seconds then check that the voltage
displayed is still in the ball park.
If not, repeat the calibration procedure.
Once calibration is complete and verified,
you will not need to do it again unless you change your VT transformer.
The IoTaWatt has a very accurate internal calibration reference and will maintain
its accuracy indefinitely. You should have no further need for the voltmeter.
Now the device is ready for the next
step `Configuring Power Channel CTs <CTconfig.html>`_
.. |save| image:: pics/SaveButton.png
:scale: 50 %
:alt: **Save**
.. |calibrate| image:: pics/CalibrateButton.png
:scale: 50 %
:alt: **Calibrate**
package com.atguigu.springcloud;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
@SpringBootApplication
@EnableEurekaClient // After startup, this service automatically registers itself with the Eureka server
@EnableDiscoveryClient // Service discovery
public class DeptProvider8001_App
{
public static void main(String[] args)
{
SpringApplication.run(DeptProvider8001_App.class, args);
}
}
//>>built
define(
//begin v1.x content
({
widgetLabel: "Wsadowe sprawdzanie pisowni",
unfound: "Nie znaleziono",
skip: "Pomiń",
skipAll: "Pomiń wszystko",
toDic: "Dodaj do słownika",
suggestions: "Propozycje",
replace: "Zastąp",
replaceWith: "Zastąp przez",
replaceAll: "Zastąp wszystko",
cancel: "Anuluj",
msg: "Nie znaleziono błędów pisowni",
iSkip: "Pomiń tę pozycję",
iSkipAll: "Pomiń wszystkie pozycje podobne do tej",
iMsg: "Brak propozycji pisowni"
})
//end v1.x content
);
from plotly.basedatatypes import BaseTraceHierarchyType as _BaseTraceHierarchyType
import copy as _copy
class Marker(_BaseTraceHierarchyType):
# class properties
# --------------------
_parent_path_str = "scattercarpet"
_path_str = "scattercarpet.marker"
_valid_props = {
"autocolorscale",
"cauto",
"cmax",
"cmid",
"cmin",
"color",
"coloraxis",
"colorbar",
"colorscale",
"colorsrc",
"gradient",
"line",
"maxdisplayed",
"opacity",
"opacitysrc",
"reversescale",
"showscale",
"size",
"sizemin",
"sizemode",
"sizeref",
"sizesrc",
"symbol",
"symbolsrc",
}
# autocolorscale
# --------------
@property
def autocolorscale(self):
"""
Determines whether the colorscale is a default palette
(`autocolorscale: true`) or the palette determined by
`marker.colorscale`. Has an effect only if in `marker.color`is
set to a numerical array. In case `colorscale` is unspecified
or `autocolorscale` is true, the default palette will be
chosen according to whether numbers in the `color` array are
all positive, all negative or mixed.
The 'autocolorscale' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["autocolorscale"]
@autocolorscale.setter
def autocolorscale(self, val):
self["autocolorscale"] = val
# cauto
# -----
@property
def cauto(self):
"""
Determines whether or not the color domain is computed with
respect to the input data (here in `marker.color`) or the
bounds set in `marker.cmin` and `marker.cmax` Has an effect
only if in `marker.color`is set to a numerical array. Defaults
to `false` when `marker.cmin` and `marker.cmax` are set by the
user.
The 'cauto' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["cauto"]
@cauto.setter
def cauto(self, val):
self["cauto"] = val
# cmax
# ----
@property
def cmax(self):
"""
Sets the upper bound of the color domain. Has an effect only if
in `marker.color`is set to a numerical array. Value should have
the same units as in `marker.color` and if set, `marker.cmin`
must be set as well.
The 'cmax' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["cmax"]
@cmax.setter
def cmax(self, val):
self["cmax"] = val
# cmid
# ----
@property
def cmid(self):
"""
Sets the mid-point of the color domain by scaling `marker.cmin`
and/or `marker.cmax` to be equidistant to this point. Has an
effect only if in `marker.color`is set to a numerical array.
Value should have the same units as in `marker.color`. Has no
effect when `marker.cauto` is `false`.
The 'cmid' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["cmid"]
@cmid.setter
def cmid(self, val):
self["cmid"] = val
# cmin
# ----
@property
def cmin(self):
"""
Sets the lower bound of the color domain. Has an effect only if
in `marker.color`is set to a numerical array. Value should have
the same units as in `marker.color` and if set, `marker.cmax`
must be set as well.
The 'cmin' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["cmin"]
@cmin.setter
def cmin(self, val):
self["cmin"] = val
# color
# -----
@property
def color(self):
"""
Sets themarkercolor. It accepts either a specific color or an
array of numbers that are mapped to the colorscale relative to
the max and min values of the array or relative to
`marker.cmin` and `marker.cmax` if set.
The 'color' property is a color and may be specified as:
- A hex string (e.g. '#ff0000')
- An rgb/rgba string (e.g. 'rgb(255,0,0)')
- An hsl/hsla string (e.g. 'hsl(0,100%,50%)')
- An hsv/hsva string (e.g. 'hsv(0,100%,100%)')
- A named CSS color:
aliceblue, antiquewhite, aqua, aquamarine, azure,
beige, bisque, black, blanchedalmond, blue,
blueviolet, brown, burlywood, cadetblue,
chartreuse, chocolate, coral, cornflowerblue,
cornsilk, crimson, cyan, darkblue, darkcyan,
darkgoldenrod, darkgray, darkgrey, darkgreen,
darkkhaki, darkmagenta, darkolivegreen, darkorange,
darkorchid, darkred, darksalmon, darkseagreen,
darkslateblue, darkslategray, darkslategrey,
darkturquoise, darkviolet, deeppink, deepskyblue,
dimgray, dimgrey, dodgerblue, firebrick,
floralwhite, forestgreen, fuchsia, gainsboro,
ghostwhite, gold, goldenrod, gray, grey, green,
greenyellow, honeydew, hotpink, indianred, indigo,
ivory, khaki, lavender, lavenderblush, lawngreen,
lemonchiffon, lightblue, lightcoral, lightcyan,
lightgoldenrodyellow, lightgray, lightgrey,
lightgreen, lightpink, lightsalmon, lightseagreen,
lightskyblue, lightslategray, lightslategrey,
lightsteelblue, lightyellow, lime, limegreen,
linen, magenta, maroon, mediumaquamarine,
mediumblue, mediumorchid, mediumpurple,
mediumseagreen, mediumslateblue, mediumspringgreen,
mediumturquoise, mediumvioletred, midnightblue,
mintcream, mistyrose, moccasin, navajowhite, navy,
oldlace, olive, olivedrab, orange, orangered,
orchid, palegoldenrod, palegreen, paleturquoise,
palevioletred, papayawhip, peachpuff, peru, pink,
plum, powderblue, purple, red, rosybrown,
royalblue, rebeccapurple, saddlebrown, salmon,
sandybrown, seagreen, seashell, sienna, silver,
skyblue, slateblue, slategray, slategrey, snow,
springgreen, steelblue, tan, teal, thistle, tomato,
turquoise, violet, wheat, white, whitesmoke,
yellow, yellowgreen
- A number that will be interpreted as a color
according to scattercarpet.marker.colorscale
- A list or array of any of the above
Returns
-------
str|numpy.ndarray
"""
return self["color"]
@color.setter
def color(self, val):
self["color"] = val
# coloraxis
# ---------
@property
def coloraxis(self):
"""
Sets a reference to a shared color axis. References to these
shared color axes are "coloraxis", "coloraxis2", "coloraxis3",
etc. Settings for these shared color axes are set in the
layout, under `layout.coloraxis`, `layout.coloraxis2`, etc.
Note that multiple color scales can be linked to the same color
axis.
The 'coloraxis' property is an identifier of a particular
subplot, of type 'coloraxis', that may be specified as the string 'coloraxis'
optionally followed by an integer >= 1
(e.g. 'coloraxis', 'coloraxis1', 'coloraxis2', 'coloraxis3', etc.)
Returns
-------
str
"""
return self["coloraxis"]
@coloraxis.setter
def coloraxis(self, val):
self["coloraxis"] = val
# colorbar
# --------
@property
def colorbar(self):
"""
The 'colorbar' property is an instance of ColorBar
that may be specified as:
- An instance of :class:`plotly.graph_objs.scattercarpet.marker.ColorBar`
- A dict of string/value properties that will be passed
to the ColorBar constructor
Supported dict properties:
bgcolor
Sets the color of padded area.
bordercolor
Sets the axis line color.
borderwidth
Sets the width (in px) or the border enclosing
this color bar.
dtick
Sets the step in-between ticks on this axis.
Use with `tick0`. Must be a positive number, or
special strings available to "log" and "date"
axes. If the axis `type` is "log", then ticks
are set every 10^(n*dtick) where n is the tick
number. For example, to set a tick mark at 1,
10, 100, 1000, ... set dtick to 1. To set tick
marks at 1, 100, 10000, ... set dtick to 2. To
set tick marks at 1, 5, 25, 125, 625, 3125, ...
set dtick to log_10(5), or 0.69897000433. "log"
has several special values; "L<f>", where `f`
is a positive number, gives ticks linearly
spaced in value (but not position). For example
`tick0` = 0.1, `dtick` = "L0.5" will put ticks
at 0.1, 0.6, 1.1, 1.6 etc. To show powers of 10
plus small digits between, use "D1" (all
digits) or "D2" (only 2 and 5). `tick0` is
ignored for "D1" and "D2". If the axis `type`
is "date", then you must convert the time to
milliseconds. For example, to set the interval
between ticks to one day, set `dtick` to
86400000.0. "date" also has special values
"M<n>" gives ticks spaced by a number of
months. `n` must be a positive integer. To set
ticks on the 15th of every third month, set
`tick0` to "2000-01-15" and `dtick` to "M3". To
set ticks every 4 years, set `dtick` to "M48"
exponentformat
Determines a formatting rule for the tick
exponents. For example, consider the number
1,000,000,000. If "none", it appears as
1,000,000,000. If "e", 1e+9. If "E", 1E+9. If
"power", 1x10^9 (with 9 in a super script). If
"SI", 1G. If "B", 1B.
len
Sets the length of the color bar This measure
excludes the padding of both ends. That is, the
color bar length is this length minus the
padding on both ends.
lenmode
Determines whether this color bar's length
(i.e. the measure in the color variation
direction) is set in units of plot "fraction"
or in *pixels. Use `len` to set the value.
nticks
Specifies the maximum number of ticks for the
particular axis. The actual number of ticks
will be chosen automatically to be less than or
equal to `nticks`. Has an effect only if
`tickmode` is set to "auto".
outlinecolor
Sets the axis line color.
outlinewidth
Sets the width (in px) of the axis line.
separatethousands
If "true", even 4-digit integers are separated
showexponent
If "all", all exponents are shown besides their
significands. If "first", only the exponent of
the first tick is shown. If "last", only the
exponent of the last tick is shown. If "none",
no exponents appear.
showticklabels
Determines whether or not the tick labels are
drawn.
showtickprefix
If "all", all tick labels are displayed with a
prefix. If "first", only the first tick is
displayed with a prefix. If "last", only the
last tick is displayed with a suffix. If
"none", tick prefixes are hidden.
showticksuffix
Same as `showtickprefix` but for tick suffixes.
thickness
Sets the thickness of the color bar This
measure excludes the size of the padding, ticks
and labels.
thicknessmode
Determines whether this color bar's thickness
(i.e. the measure in the constant color
direction) is set in units of plot "fraction"
or in "pixels". Use `thickness` to set the
value.
tick0
Sets the placement of the first tick on this
axis. Use with `dtick`. If the axis `type` is
"log", then you must take the log of your
starting tick (e.g. to set the starting tick to
100, set the `tick0` to 2) except when
`dtick`=*L<f>* (see `dtick` for more info). If
the axis `type` is "date", it should be a date
string, like date data. If the axis `type` is
"category", it should be a number, using the
scale where each category is assigned a serial
number from zero in the order it appears.
tickangle
Sets the angle of the tick labels with respect
to the horizontal. For example, a `tickangle`
of -90 draws the tick labels vertically.
tickcolor
Sets the tick color.
tickfont
Sets the color bar's tick label font
tickformat
Sets the tick label formatting rule using d3
formatting mini-languages which are very
similar to those in Python. For numbers, see:
https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format
And for dates see:
https://github.com/d3/d3-time-
format#locale_format We add one item to d3's
date formatter: "%{n}f" for fractional seconds
with n digits. For example, *2016-10-13
09:15:23.456* with tickformat "%H~%M~%S.%2f"
would display "09~15~23.46"
tickformatstops
A tuple of :class:`plotly.graph_objects.scatter
carpet.marker.colorbar.Tickformatstop`
instances or dicts with compatible properties
tickformatstopdefaults
When used in a template (as layout.template.dat
a.scattercarpet.marker.colorbar.tickformatstopd
efaults), sets the default property values to
use for elements of
scattercarpet.marker.colorbar.tickformatstops
ticklen
Sets the tick length (in px).
tickmode
Sets the tick mode for this axis. If "auto",
the number of ticks is set via `nticks`. If
"linear", the placement of the ticks is
determined by a starting position `tick0` and a
tick step `dtick` ("linear" is the default
value if `tick0` and `dtick` are provided). If
"array", the placement of the ticks is set via
`tickvals` and the tick text is `ticktext`.
("array" is the default value if `tickvals` is
provided).
tickprefix
Sets a tick label prefix.
ticks
Determines whether ticks are drawn or not. If
"", this axis' ticks are not drawn. If
"outside" ("inside"), this axis' are drawn
outside (inside) the axis lines.
ticksuffix
Sets a tick label suffix.
ticktext
Sets the text displayed at the ticks position
via `tickvals`. Only has an effect if
`tickmode` is set to "array". Used with
`tickvals`.
ticktextsrc
Sets the source reference on Chart Studio Cloud
for ticktext .
tickvals
Sets the values at which ticks on this axis
appear. Only has an effect if `tickmode` is set
to "array". Used with `ticktext`.
tickvalssrc
Sets the source reference on Chart Studio Cloud
for tickvals .
tickwidth
Sets the tick width (in px).
title
:class:`plotly.graph_objects.scattercarpet.mark
er.colorbar.Title` instance or dict with
compatible properties
titlefont
Deprecated: Please use
scattercarpet.marker.colorbar.title.font
instead. Sets this color bar's title font. Note
that the title's font used to be set by the now
deprecated `titlefont` attribute.
titleside
Deprecated: Please use
scattercarpet.marker.colorbar.title.side
instead. Determines the location of color bar's
title with respect to the color bar. Note that
the title's location used to be set by the now
deprecated `titleside` attribute.
x
Sets the x position of the color bar (in plot
fraction).
xanchor
Sets this color bar's horizontal position
anchor. This anchor binds the `x` position to
the "left", "center" or "right" of the color
bar.
xpad
Sets the amount of padding (in px) along the x
direction.
y
Sets the y position of the color bar (in plot
fraction).
yanchor
Sets this color bar's vertical position anchor
This anchor binds the `y` position to the
"top", "middle" or "bottom" of the color bar.
ypad
Sets the amount of padding (in px) along the y
direction.
Returns
-------
plotly.graph_objs.scattercarpet.marker.ColorBar
"""
return self["colorbar"]
@colorbar.setter
def colorbar(self, val):
self["colorbar"] = val
# colorscale
# ----------
@property
def colorscale(self):
"""
Sets the colorscale. Has an effect only if in `marker.color`is
set to a numerical array. The colorscale must be an array
containing arrays mapping a normalized value to an rgb, rgba,
hex, hsl, hsv, or named color string. At minimum, a mapping for
the lowest (0) and highest (1) values are required. For
example, `[[0, 'rgb(0,0,255)'], [1, 'rgb(255,0,0)']]`. To
control the bounds of the colorscale in color space,
use`marker.cmin` and `marker.cmax`. Alternatively, `colorscale`
may be a palette name string of the following list: Greys,YlGnB
u,Greens,YlOrRd,Bluered,RdBu,Reds,Blues,Picnic,Rainbow,Portland
,Jet,Hot,Blackbody,Earth,Electric,Viridis,Cividis.
The 'colorscale' property is a colorscale and may be
specified as:
- A list of colors that will be spaced evenly to create the colorscale.
Many predefined colorscale lists are included in the sequential, diverging,
and cyclical modules in the plotly.colors package.
- A list of 2-element lists where the first element is the
normalized color level value (starting at 0 and ending at 1),
and the second item is a valid color string.
(e.g. [[0, 'green'], [0.5, 'red'], [1.0, 'rgb(0, 0, 255)']])
- One of the following named colorscales:
['aggrnyl', 'agsunset', 'algae', 'amp', 'armyrose', 'balance',
'blackbody', 'bluered', 'blues', 'blugrn', 'bluyl', 'brbg',
'brwnyl', 'bugn', 'bupu', 'burg', 'burgyl', 'cividis', 'curl',
'darkmint', 'deep', 'delta', 'dense', 'earth', 'edge', 'electric',
'emrld', 'fall', 'geyser', 'gnbu', 'gray', 'greens', 'greys',
'haline', 'hot', 'hsv', 'ice', 'icefire', 'inferno', 'jet',
'magenta', 'magma', 'matter', 'mint', 'mrybm', 'mygbm', 'oranges',
'orrd', 'oryel', 'peach', 'phase', 'picnic', 'pinkyl', 'piyg',
'plasma', 'plotly3', 'portland', 'prgn', 'pubu', 'pubugn', 'puor',
'purd', 'purp', 'purples', 'purpor', 'rainbow', 'rdbu', 'rdgy',
'rdpu', 'rdylbu', 'rdylgn', 'redor', 'reds', 'solar', 'spectral',
'speed', 'sunset', 'sunsetdark', 'teal', 'tealgrn', 'tealrose',
'tempo', 'temps', 'thermal', 'tropic', 'turbid', 'twilight',
'viridis', 'ylgn', 'ylgnbu', 'ylorbr', 'ylorrd'].
Appending '_r' to a named colorscale reverses it.
Returns
-------
str
"""
return self["colorscale"]
@colorscale.setter
def colorscale(self, val):
self["colorscale"] = val
# colorsrc
# --------
@property
def colorsrc(self):
"""
Sets the source reference on Chart Studio Cloud for color .
The 'colorsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["colorsrc"]
@colorsrc.setter
def colorsrc(self, val):
self["colorsrc"] = val
# gradient
# --------
@property
def gradient(self):
"""
The 'gradient' property is an instance of Gradient
that may be specified as:
- An instance of :class:`plotly.graph_objs.scattercarpet.marker.Gradient`
- A dict of string/value properties that will be passed
to the Gradient constructor
Supported dict properties:
color
Sets the final color of the gradient fill: the
center color for radial, the right for
horizontal, or the bottom for vertical.
colorsrc
Sets the source reference on Chart Studio Cloud
for color .
type
Sets the type of gradient used to fill the
markers
typesrc
Sets the source reference on Chart Studio Cloud
for type .
Returns
-------
plotly.graph_objs.scattercarpet.marker.Gradient
"""
return self["gradient"]
@gradient.setter
def gradient(self, val):
self["gradient"] = val
# line
# ----
@property
def line(self):
"""
The 'line' property is an instance of Line
that may be specified as:
- An instance of :class:`plotly.graph_objs.scattercarpet.marker.Line`
- A dict of string/value properties that will be passed
to the Line constructor
Supported dict properties:
autocolorscale
Determines whether the colorscale is a default
palette (`autocolorscale: true`) or the palette
determined by `marker.line.colorscale`. Has an
effect only if in `marker.line.color`is set to
a numerical array. In case `colorscale` is
unspecified or `autocolorscale` is true, the
default palette will be chosen according to
whether numbers in the `color` array are all
positive, all negative or mixed.
cauto
Determines whether or not the color domain is
computed with respect to the input data (here
in `marker.line.color`) or the bounds set in
`marker.line.cmin` and `marker.line.cmax` Has
an effect only if in `marker.line.color`is set
to a numerical array. Defaults to `false` when
`marker.line.cmin` and `marker.line.cmax` are
set by the user.
cmax
Sets the upper bound of the color domain. Has
an effect only if in `marker.line.color`is set
to a numerical array. Value should have the
same units as in `marker.line.color` and if
set, `marker.line.cmin` must be set as well.
cmid
Sets the mid-point of the color domain by
scaling `marker.line.cmin` and/or
`marker.line.cmax` to be equidistant to this
point. Has an effect only if in
`marker.line.color`is set to a numerical array.
Value should have the same units as in
`marker.line.color`. Has no effect when
`marker.line.cauto` is `false`.
cmin
Sets the lower bound of the color domain. Has
an effect only if in `marker.line.color`is set
to a numerical array. Value should have the
same units as in `marker.line.color` and if
set, `marker.line.cmax` must be set as well.
color
Sets themarker.linecolor. It accepts either a
specific color or an array of numbers that are
mapped to the colorscale relative to the max
and min values of the array or relative to
`marker.line.cmin` and `marker.line.cmax` if
set.
coloraxis
Sets a reference to a shared color axis.
References to these shared color axes are
"coloraxis", "coloraxis2", "coloraxis3", etc.
Settings for these shared color axes are set in
the layout, under `layout.coloraxis`,
`layout.coloraxis2`, etc. Note that multiple
color scales can be linked to the same color
axis.
colorscale
Sets the colorscale. Has an effect only if in
`marker.line.color`is set to a numerical array.
The colorscale must be an array containing
arrays mapping a normalized value to an rgb,
rgba, hex, hsl, hsv, or named color string. At
minimum, a mapping for the lowest (0) and
highest (1) values are required. For example,
`[[0, 'rgb(0,0,255)'], [1, 'rgb(255,0,0)']]`.
To control the bounds of the colorscale in
color space, use`marker.line.cmin` and
`marker.line.cmax`. Alternatively, `colorscale`
may be a palette name string of the following
list: Greys,YlGnBu,Greens,YlOrRd,Bluered,RdBu,R
eds,Blues,Picnic,Rainbow,Portland,Jet,Hot,Black
body,Earth,Electric,Viridis,Cividis.
colorsrc
Sets the source reference on Chart Studio Cloud
for color .
reversescale
Reverses the color mapping if true. Has an
effect only if in `marker.line.color`is set to
a numerical array. If true, `marker.line.cmin`
will correspond to the last color in the array
and `marker.line.cmax` will correspond to the
first color.
width
Sets the width (in px) of the lines bounding
the marker points.
widthsrc
Sets the source reference on Chart Studio Cloud
for width .
Returns
-------
plotly.graph_objs.scattercarpet.marker.Line
"""
return self["line"]
@line.setter
def line(self, val):
self["line"] = val
# maxdisplayed
# ------------
@property
def maxdisplayed(self):
"""
Sets a maximum number of points to be drawn on the graph. 0
corresponds to no limit.
The 'maxdisplayed' property is a number and may be specified as:
- An int or float in the interval [0, inf]
Returns
-------
int|float
"""
return self["maxdisplayed"]
@maxdisplayed.setter
def maxdisplayed(self, val):
self["maxdisplayed"] = val
# opacity
# -------
@property
def opacity(self):
"""
Sets the marker opacity.
The 'opacity' property is a number and may be specified as:
- An int or float in the interval [0, 1]
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
int|float|numpy.ndarray
"""
return self["opacity"]
@opacity.setter
def opacity(self, val):
self["opacity"] = val
# opacitysrc
# ----------
@property
def opacitysrc(self):
"""
Sets the source reference on Chart Studio Cloud for opacity .
The 'opacitysrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["opacitysrc"]
@opacitysrc.setter
def opacitysrc(self, val):
self["opacitysrc"] = val
# reversescale
# ------------
@property
def reversescale(self):
"""
Reverses the color mapping if true. Has an effect only if in
`marker.color`is set to a numerical array. If true,
`marker.cmin` will correspond to the last color in the array
and `marker.cmax` will correspond to the first color.
The 'reversescale' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["reversescale"]
@reversescale.setter
def reversescale(self, val):
self["reversescale"] = val
# showscale
# ---------
@property
def showscale(self):
"""
Determines whether or not a colorbar is displayed for this
trace. Has an effect only if in `marker.color`is set to a
numerical array.
The 'showscale' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["showscale"]
@showscale.setter
def showscale(self, val):
self["showscale"] = val
# size
# ----
@property
def size(self):
"""
Sets the marker size (in px).
The 'size' property is a number and may be specified as:
- An int or float in the interval [0, inf]
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
int|float|numpy.ndarray
"""
return self["size"]
@size.setter
def size(self, val):
self["size"] = val
# sizemin
# -------
@property
def sizemin(self):
"""
Has an effect only if `marker.size` is set to a numerical
array. Sets the minimum size (in px) of the rendered marker
points.
The 'sizemin' property is a number and may be specified as:
- An int or float in the interval [0, inf]
Returns
-------
int|float
"""
return self["sizemin"]
@sizemin.setter
def sizemin(self, val):
self["sizemin"] = val
# sizemode
# --------
@property
def sizemode(self):
"""
Has an effect only if `marker.size` is set to a numerical
array. Sets the rule for which the data in `size` is converted
to pixels.
The 'sizemode' property is an enumeration that may be specified as:
- One of the following enumeration values:
['diameter', 'area']
Returns
-------
Any
"""
return self["sizemode"]
@sizemode.setter
def sizemode(self, val):
self["sizemode"] = val
# sizeref
# -------
@property
def sizeref(self):
"""
Has an effect only if `marker.size` is set to a numerical
array. Sets the scale factor used to determine the rendered
size of marker points. Use with `sizemin` and `sizemode`.
The 'sizeref' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["sizeref"]
@sizeref.setter
def sizeref(self, val):
self["sizeref"] = val
# sizesrc
# -------
@property
def sizesrc(self):
"""
Sets the source reference on Chart Studio Cloud for size .
The 'sizesrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["sizesrc"]
@sizesrc.setter
def sizesrc(self, val):
self["sizesrc"] = val
# symbol
# ------
@property
def symbol(self):
"""
Sets the marker symbol type. Adding 100 is equivalent to
appending "-open" to a symbol name. Adding 200 is equivalent to
appending "-dot" to a symbol name. Adding 300 is equivalent to
appending "-open-dot" or "dot-open" to a symbol name.
The 'symbol' property is an enumeration that may be specified as:
- One of the following enumeration values:
[0, '0', 'circle', 100, '100', 'circle-open', 200, '200',
'circle-dot', 300, '300', 'circle-open-dot', 1, '1',
'square', 101, '101', 'square-open', 201, '201',
'square-dot', 301, '301', 'square-open-dot', 2, '2',
'diamond', 102, '102', 'diamond-open', 202, '202',
'diamond-dot', 302, '302', 'diamond-open-dot', 3, '3',
'cross', 103, '103', 'cross-open', 203, '203',
'cross-dot', 303, '303', 'cross-open-dot', 4, '4', 'x',
104, '104', 'x-open', 204, '204', 'x-dot', 304, '304',
'x-open-dot', 5, '5', 'triangle-up', 105, '105',
'triangle-up-open', 205, '205', 'triangle-up-dot', 305,
'305', 'triangle-up-open-dot', 6, '6', 'triangle-down',
106, '106', 'triangle-down-open', 206, '206',
'triangle-down-dot', 306, '306', 'triangle-down-open-dot',
7, '7', 'triangle-left', 107, '107', 'triangle-left-open',
207, '207', 'triangle-left-dot', 307, '307',
'triangle-left-open-dot', 8, '8', 'triangle-right', 108,
'108', 'triangle-right-open', 208, '208',
'triangle-right-dot', 308, '308',
'triangle-right-open-dot', 9, '9', 'triangle-ne', 109,
'109', 'triangle-ne-open', 209, '209', 'triangle-ne-dot',
309, '309', 'triangle-ne-open-dot', 10, '10',
'triangle-se', 110, '110', 'triangle-se-open', 210, '210',
'triangle-se-dot', 310, '310', 'triangle-se-open-dot', 11,
'11', 'triangle-sw', 111, '111', 'triangle-sw-open', 211,
'211', 'triangle-sw-dot', 311, '311',
'triangle-sw-open-dot', 12, '12', 'triangle-nw', 112,
'112', 'triangle-nw-open', 212, '212', 'triangle-nw-dot',
312, '312', 'triangle-nw-open-dot', 13, '13', 'pentagon',
113, '113', 'pentagon-open', 213, '213', 'pentagon-dot',
313, '313', 'pentagon-open-dot', 14, '14', 'hexagon', 114,
'114', 'hexagon-open', 214, '214', 'hexagon-dot', 314,
'314', 'hexagon-open-dot', 15, '15', 'hexagon2', 115,
'115', 'hexagon2-open', 215, '215', 'hexagon2-dot', 315,
'315', 'hexagon2-open-dot', 16, '16', 'octagon', 116,
'116', 'octagon-open', 216, '216', 'octagon-dot', 316,
'316', 'octagon-open-dot', 17, '17', 'star', 117, '117',
'star-open', 217, '217', 'star-dot', 317, '317',
'star-open-dot', 18, '18', 'hexagram', 118, '118',
'hexagram-open', 218, '218', 'hexagram-dot', 318, '318',
'hexagram-open-dot', 19, '19', 'star-triangle-up', 119,
'119', 'star-triangle-up-open', 219, '219',
'star-triangle-up-dot', 319, '319',
'star-triangle-up-open-dot', 20, '20',
'star-triangle-down', 120, '120',
'star-triangle-down-open', 220, '220',
'star-triangle-down-dot', 320, '320',
'star-triangle-down-open-dot', 21, '21', 'star-square',
121, '121', 'star-square-open', 221, '221',
'star-square-dot', 321, '321', 'star-square-open-dot', 22,
'22', 'star-diamond', 122, '122', 'star-diamond-open',
222, '222', 'star-diamond-dot', 322, '322',
'star-diamond-open-dot', 23, '23', 'diamond-tall', 123,
'123', 'diamond-tall-open', 223, '223',
'diamond-tall-dot', 323, '323', 'diamond-tall-open-dot',
24, '24', 'diamond-wide', 124, '124', 'diamond-wide-open',
224, '224', 'diamond-wide-dot', 324, '324',
'diamond-wide-open-dot', 25, '25', 'hourglass', 125,
'125', 'hourglass-open', 26, '26', 'bowtie', 126, '126',
'bowtie-open', 27, '27', 'circle-cross', 127, '127',
'circle-cross-open', 28, '28', 'circle-x', 128, '128',
'circle-x-open', 29, '29', 'square-cross', 129, '129',
'square-cross-open', 30, '30', 'square-x', 130, '130',
'square-x-open', 31, '31', 'diamond-cross', 131, '131',
'diamond-cross-open', 32, '32', 'diamond-x', 132, '132',
'diamond-x-open', 33, '33', 'cross-thin', 133, '133',
'cross-thin-open', 34, '34', 'x-thin', 134, '134',
'x-thin-open', 35, '35', 'asterisk', 135, '135',
'asterisk-open', 36, '36', 'hash', 136, '136',
'hash-open', 236, '236', 'hash-dot', 336, '336',
'hash-open-dot', 37, '37', 'y-up', 137, '137',
'y-up-open', 38, '38', 'y-down', 138, '138',
'y-down-open', 39, '39', 'y-left', 139, '139',
'y-left-open', 40, '40', 'y-right', 140, '140',
'y-right-open', 41, '41', 'line-ew', 141, '141',
'line-ew-open', 42, '42', 'line-ns', 142, '142',
'line-ns-open', 43, '43', 'line-ne', 143, '143',
'line-ne-open', 44, '44', 'line-nw', 144, '144',
'line-nw-open', 45, '45', 'arrow-up', 145, '145',
'arrow-up-open', 46, '46', 'arrow-down', 146, '146',
'arrow-down-open', 47, '47', 'arrow-left', 147, '147',
'arrow-left-open', 48, '48', 'arrow-right', 148, '148',
'arrow-right-open', 49, '49', 'arrow-bar-up', 149, '149',
'arrow-bar-up-open', 50, '50', 'arrow-bar-down', 150,
'150', 'arrow-bar-down-open', 51, '51', 'arrow-bar-left',
151, '151', 'arrow-bar-left-open', 52, '52',
'arrow-bar-right', 152, '152', 'arrow-bar-right-open']
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["symbol"]
@symbol.setter
def symbol(self, val):
self["symbol"] = val
# symbolsrc
# ---------
@property
def symbolsrc(self):
"""
Sets the source reference on Chart Studio Cloud for symbol.
The 'symbolsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["symbolsrc"]
@symbolsrc.setter
def symbolsrc(self, val):
self["symbolsrc"] = val
# Self properties description
# ---------------------------
@property
def _prop_descriptions(self):
return """\
autocolorscale
Determines whether the colorscale is a default palette
(`autocolorscale: true`) or the palette determined by
`marker.colorscale`. Has an effect only if
`marker.color` is set to a numerical array. In case
`colorscale` is unspecified or `autocolorscale` is
true, the default palette will be chosen according to
whether numbers in the `color` array are all positive,
all negative or mixed.
cauto
Determines whether or not the color domain is computed
with respect to the input data (here in `marker.color`)
or the bounds set in `marker.cmin` and `marker.cmax`.
Has an effect only if `marker.color` is set to a
numerical array. Defaults to `false` when `marker.cmin`
and `marker.cmax` are set by the user.
cmax
Sets the upper bound of the color domain. Has an effect
only if `marker.color` is set to a numerical array.
Value should have the same units as in `marker.color`
and if set, `marker.cmin` must be set as well.
cmid
Sets the mid-point of the color domain by scaling
`marker.cmin` and/or `marker.cmax` to be equidistant to
this point. Has an effect only if `marker.color` is
set to a numerical array. Value should have the same
units as in `marker.color`. Has no effect when
`marker.cauto` is `false`.
cmin
Sets the lower bound of the color domain. Has an effect
only if `marker.color` is set to a numerical array.
Value should have the same units as in `marker.color`
and if set, `marker.cmax` must be set as well.
color
Sets the marker color. It accepts either a specific color
or an array of numbers that are mapped to the
colorscale relative to the max and min values of the
array or relative to `marker.cmin` and `marker.cmax` if
set.
coloraxis
Sets a reference to a shared color axis. References to
these shared color axes are "coloraxis", "coloraxis2",
"coloraxis3", etc. Settings for these shared color axes
are set in the layout, under `layout.coloraxis`,
`layout.coloraxis2`, etc. Note that multiple color
scales can be linked to the same color axis.
colorbar
:class:`plotly.graph_objects.scattercarpet.marker.Color
Bar` instance or dict with compatible properties
colorscale
Sets the colorscale. Has an effect only if
`marker.color` is set to a numerical array. The
colorscale must be an array containing arrays mapping a
normalized value to an rgb, rgba, hex, hsl, hsv, or
named color string. At minimum, a mapping for the
lowest (0) and highest (1) values is required. For
example, `[[0, 'rgb(0,0,255)'], [1, 'rgb(255,0,0)']]`.
To control the bounds of the colorscale in color space,
use `marker.cmin` and `marker.cmax`. Alternatively,
`colorscale` may be a palette name string of the
following list: Greys, YlGnBu, Greens, YlOrRd, Bluered,
RdBu, Reds, Blues, Picnic, Rainbow, Portland, Jet, Hot,
Blackbody, Earth, Electric, Viridis, Cividis.
colorsrc
Sets the source reference on Chart Studio Cloud for
color.
gradient
:class:`plotly.graph_objects.scattercarpet.marker.Gradi
ent` instance or dict with compatible properties
line
:class:`plotly.graph_objects.scattercarpet.marker.Line`
instance or dict with compatible properties
maxdisplayed
Sets a maximum number of points to be drawn on the
graph. 0 corresponds to no limit.
opacity
Sets the marker opacity.
opacitysrc
Sets the source reference on Chart Studio Cloud for
opacity.
reversescale
Reverses the color mapping if true. Has an effect only
if `marker.color` is set to a numerical array. If
true, `marker.cmin` will correspond to the last color
in the array and `marker.cmax` will correspond to the
first color.
showscale
Determines whether or not a colorbar is displayed for
this trace. Has an effect only if `marker.color` is
set to a numerical array.
size
Sets the marker size (in px).
sizemin
Has an effect only if `marker.size` is set to a
numerical array. Sets the minimum size (in px) of the
rendered marker points.
sizemode
Has an effect only if `marker.size` is set to a
numerical array. Sets the rule for which the data in
`size` is converted to pixels.
sizeref
Has an effect only if `marker.size` is set to a
numerical array. Sets the scale factor used to
determine the rendered size of marker points. Use with
`sizemin` and `sizemode`.
sizesrc
Sets the source reference on Chart Studio Cloud for
size.
symbol
Sets the marker symbol type. Adding 100 is equivalent
to appending "-open" to a symbol name. Adding 200 is
equivalent to appending "-dot" to a symbol name. Adding
300 is equivalent to appending "-open-dot" or "dot-
open" to a symbol name.
symbolsrc
Sets the source reference on Chart Studio Cloud for
symbol.
"""
def __init__(
self,
arg=None,
autocolorscale=None,
cauto=None,
cmax=None,
cmid=None,
cmin=None,
color=None,
coloraxis=None,
colorbar=None,
colorscale=None,
colorsrc=None,
gradient=None,
line=None,
maxdisplayed=None,
opacity=None,
opacitysrc=None,
reversescale=None,
showscale=None,
size=None,
sizemin=None,
sizemode=None,
sizeref=None,
sizesrc=None,
symbol=None,
symbolsrc=None,
**kwargs
):
"""
Construct a new Marker object
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of
:class:`plotly.graph_objs.scattercarpet.Marker`
autocolorscale
Determines whether the colorscale is a default palette
(`autocolorscale: true`) or the palette determined by
`marker.colorscale`. Has an effect only if
`marker.color` is set to a numerical array. In case
`colorscale` is unspecified or `autocolorscale` is
true, the default palette will be chosen according to
whether numbers in the `color` array are all positive,
all negative or mixed.
cauto
Determines whether or not the color domain is computed
with respect to the input data (here in `marker.color`)
or the bounds set in `marker.cmin` and `marker.cmax`.
Has an effect only if `marker.color` is set to a
numerical array. Defaults to `false` when `marker.cmin`
and `marker.cmax` are set by the user.
cmax
Sets the upper bound of the color domain. Has an effect
only if `marker.color` is set to a numerical array.
Value should have the same units as in `marker.color`
and if set, `marker.cmin` must be set as well.
cmid
Sets the mid-point of the color domain by scaling
`marker.cmin` and/or `marker.cmax` to be equidistant to
this point. Has an effect only if `marker.color` is
set to a numerical array. Value should have the same
units as in `marker.color`. Has no effect when
`marker.cauto` is `false`.
cmin
Sets the lower bound of the color domain. Has an effect
only if `marker.color` is set to a numerical array.
Value should have the same units as in `marker.color`
and if set, `marker.cmax` must be set as well.
color
Sets the marker color. It accepts either a specific color
or an array of numbers that are mapped to the
colorscale relative to the max and min values of the
array or relative to `marker.cmin` and `marker.cmax` if
set.
coloraxis
Sets a reference to a shared color axis. References to
these shared color axes are "coloraxis", "coloraxis2",
"coloraxis3", etc. Settings for these shared color axes
are set in the layout, under `layout.coloraxis`,
`layout.coloraxis2`, etc. Note that multiple color
scales can be linked to the same color axis.
colorbar
:class:`plotly.graph_objects.scattercarpet.marker.Color
Bar` instance or dict with compatible properties
colorscale
Sets the colorscale. Has an effect only if
`marker.color` is set to a numerical array. The
colorscale must be an array containing arrays mapping a
normalized value to an rgb, rgba, hex, hsl, hsv, or
named color string. At minimum, a mapping for the
lowest (0) and highest (1) values is required. For
example, `[[0, 'rgb(0,0,255)'], [1, 'rgb(255,0,0)']]`.
To control the bounds of the colorscale in color space,
use `marker.cmin` and `marker.cmax`. Alternatively,
`colorscale` may be a palette name string of the
following list: Greys, YlGnBu, Greens, YlOrRd, Bluered,
RdBu, Reds, Blues, Picnic, Rainbow, Portland, Jet, Hot,
Blackbody, Earth, Electric, Viridis, Cividis.
colorsrc
Sets the source reference on Chart Studio Cloud for
color.
gradient
:class:`plotly.graph_objects.scattercarpet.marker.Gradi
ent` instance or dict with compatible properties
line
:class:`plotly.graph_objects.scattercarpet.marker.Line`
instance or dict with compatible properties
maxdisplayed
Sets a maximum number of points to be drawn on the
graph. 0 corresponds to no limit.
opacity
Sets the marker opacity.
opacitysrc
Sets the source reference on Chart Studio Cloud for
opacity.
reversescale
Reverses the color mapping if true. Has an effect only
if `marker.color` is set to a numerical array. If
true, `marker.cmin` will correspond to the last color
in the array and `marker.cmax` will correspond to the
first color.
showscale
Determines whether or not a colorbar is displayed for
this trace. Has an effect only if `marker.color` is
set to a numerical array.
size
Sets the marker size (in px).
sizemin
Has an effect only if `marker.size` is set to a
numerical array. Sets the minimum size (in px) of the
rendered marker points.
sizemode
Has an effect only if `marker.size` is set to a
numerical array. Sets the rule for which the data in
`size` is converted to pixels.
sizeref
Has an effect only if `marker.size` is set to a
numerical array. Sets the scale factor used to
determine the rendered size of marker points. Use with
`sizemin` and `sizemode`.
sizesrc
Sets the source reference on Chart Studio Cloud for
size.
symbol
Sets the marker symbol type. Adding 100 is equivalent
to appending "-open" to a symbol name. Adding 200 is
equivalent to appending "-dot" to a symbol name. Adding
300 is equivalent to appending "-open-dot" or "dot-
open" to a symbol name.
symbolsrc
Sets the source reference on Chart Studio Cloud for
symbol.
Returns
-------
Marker
"""
super(Marker, self).__init__("marker")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
# Validate arg
# ------------
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError(
"""\
The first argument to the plotly.graph_objs.scattercarpet.Marker
constructor must be a dict or
an instance of :class:`plotly.graph_objs.scattercarpet.Marker`"""
)
# Handle skip_invalid
# -------------------
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
# Populate data dict with properties
# ----------------------------------
_v = arg.pop("autocolorscale", None)
_v = autocolorscale if autocolorscale is not None else _v
if _v is not None:
self["autocolorscale"] = _v
_v = arg.pop("cauto", None)
_v = cauto if cauto is not None else _v
if _v is not None:
self["cauto"] = _v
_v = arg.pop("cmax", None)
_v = cmax if cmax is not None else _v
if _v is not None:
self["cmax"] = _v
_v = arg.pop("cmid", None)
_v = cmid if cmid is not None else _v
if _v is not None:
self["cmid"] = _v
_v = arg.pop("cmin", None)
_v = cmin if cmin is not None else _v
if _v is not None:
self["cmin"] = _v
_v = arg.pop("color", None)
_v = color if color is not None else _v
if _v is not None:
self["color"] = _v
_v = arg.pop("coloraxis", None)
_v = coloraxis if coloraxis is not None else _v
if _v is not None:
self["coloraxis"] = _v
_v = arg.pop("colorbar", None)
_v = colorbar if colorbar is not None else _v
if _v is not None:
self["colorbar"] = _v
_v = arg.pop("colorscale", None)
_v = colorscale if colorscale is not None else _v
if _v is not None:
self["colorscale"] = _v
_v = arg.pop("colorsrc", None)
_v = colorsrc if colorsrc is not None else _v
if _v is not None:
self["colorsrc"] = _v
_v = arg.pop("gradient", None)
_v = gradient if gradient is not None else _v
if _v is not None:
self["gradient"] = _v
_v = arg.pop("line", None)
_v = line if line is not None else _v
if _v is not None:
self["line"] = _v
_v = arg.pop("maxdisplayed", None)
_v = maxdisplayed if maxdisplayed is not None else _v
if _v is not None:
self["maxdisplayed"] = _v
_v = arg.pop("opacity", None)
_v = opacity if opacity is not None else _v
if _v is not None:
self["opacity"] = _v
_v = arg.pop("opacitysrc", None)
_v = opacitysrc if opacitysrc is not None else _v
if _v is not None:
self["opacitysrc"] = _v
_v = arg.pop("reversescale", None)
_v = reversescale if reversescale is not None else _v
if _v is not None:
self["reversescale"] = _v
_v = arg.pop("showscale", None)
_v = showscale if showscale is not None else _v
if _v is not None:
self["showscale"] = _v
_v = arg.pop("size", None)
_v = size if size is not None else _v
if _v is not None:
self["size"] = _v
_v = arg.pop("sizemin", None)
_v = sizemin if sizemin is not None else _v
if _v is not None:
self["sizemin"] = _v
_v = arg.pop("sizemode", None)
_v = sizemode if sizemode is not None else _v
if _v is not None:
self["sizemode"] = _v
_v = arg.pop("sizeref", None)
_v = sizeref if sizeref is not None else _v
if _v is not None:
self["sizeref"] = _v
_v = arg.pop("sizesrc", None)
_v = sizesrc if sizesrc is not None else _v
if _v is not None:
self["sizesrc"] = _v
_v = arg.pop("symbol", None)
_v = symbol if symbol is not None else _v
if _v is not None:
self["symbol"] = _v
_v = arg.pop("symbolsrc", None)
_v = symbolsrc if symbolsrc is not None else _v
if _v is not None:
self["symbolsrc"] = _v
# Process unknown kwargs
# ----------------------
self._process_kwargs(**dict(arg, **kwargs))
# Reset skip_invalid
# ------------------
self._skip_invalid = False
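# The constructor above resolves every property with one precedence rule: a
# value popped from the `arg` dict is used unless the matching keyword argument
# was passed explicitly, and only non-None results are stored. A minimal
# standalone sketch of that pattern (plain Python, not using plotly itself;
# `resolve_props` is a hypothetical helper name):

```python
import copy


def resolve_props(arg=None, **kwargs):
    """Mimic the precedence rule: an explicit keyword beats the arg dict."""
    arg = copy.copy(arg) if arg else {}
    resolved = {}
    for key in set(arg) | set(kwargs):
        _v = arg.pop(key, None)
        kw = kwargs.get(key)
        # Explicit keyword argument takes precedence when it is not None.
        _v = kw if kw is not None else _v
        # Only store properties that actually resolved to a value.
        if _v is not None:
            resolved[key] = _v
    return resolved


# The explicit keyword size=12 overrides the size carried in arg,
# while opacity falls through from the dict unchanged.
props = resolve_props({"size": 8, "opacity": 0.5}, size=12)
```

This mirrors the repeated `_v = arg.pop(...)` / `_v = kwarg if kwarg is not None else _v` blocks in the constructor, just generalized over a key set instead of being unrolled per property.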
<!doctype html>
<html class="default no-js">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>BlendMode | Hydro-SDK</title>
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="../assets/css/main.css">
</head>
<body>
<header>
<div class="tsd-page-toolbar">
<div class="container">
<div class="table-wrap">
<div class="table-cell" id="tsd-search" data-index="../assets/js/search.js" data-base="..">
<div class="field">
<label for="tsd-search-field" class="tsd-widget search no-caption">Search</label>
<input id="tsd-search-field" type="text" />
</div>
<ul class="results">
<li class="state loading">Preparing search index...</li>
<li class="state failure">The search index is not available</li>
</ul>
<a href="../index.html" class="title">Hydro-SDK</a>
</div>
<div class="table-cell" id="tsd-widgets">
<div id="tsd-filter">
<a href="#" class="tsd-widget options no-caption" data-toggle="options">Options</a>
<div class="tsd-filter-group">
<div class="tsd-select" id="tsd-filter-visibility">
<span class="tsd-select-label">All</span>
<ul class="tsd-select-list">
<li data-value="public">Public</li>
<li data-value="protected">Public/Protected</li>
<li data-value="private" class="selected">All</li>
</ul>
</div>
<input type="checkbox" id="tsd-filter-inherited" checked />
<label class="tsd-widget" for="tsd-filter-inherited">Inherited</label>
<input type="checkbox" id="tsd-filter-externals" checked />
<label class="tsd-widget" for="tsd-filter-externals">Externals</label>
<input type="checkbox" id="tsd-filter-only-exported" />
<label class="tsd-widget" for="tsd-filter-only-exported">Only exported</label>
</div>
</div>
<a href="#" class="tsd-widget menu no-caption" data-toggle="menu">Menu</a>
</div>
</div>
</div>
</div>
<div class="tsd-page-title">
<div class="container">
<ul class="tsd-breadcrumb">
<li>
<a href="../index.html">Globals</a>
</li>
<li>
<a href="../modules/_dart_ui_index_.html">"dart/ui/index"</a>
</li>
<li>
<a href="_dart_ui_index_.blendmode.html">BlendMode</a>
</li>
</ul>
<h1>Enumeration BlendMode</h1>
</div>
</div>
</header>
<div class="container container-main">
<div class="row">
<div class="col-8 col-content">
<section class="tsd-panel-group tsd-index-group">
<h2>Index</h2>
<section class="tsd-panel tsd-index-panel">
<div class="tsd-index-content">
<section class="tsd-index-section ">
<h3>Enumeration members</h3>
<ul class="tsd-index-list">
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#clear" class="tsd-kind-icon">clear</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#color" class="tsd-kind-icon">color</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#colorburn" class="tsd-kind-icon">color<wbr>Burn</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#colordodge" class="tsd-kind-icon">color<wbr>Dodge</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#darken" class="tsd-kind-icon">darken</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#difference" class="tsd-kind-icon">difference</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#dst" class="tsd-kind-icon">dst</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#dstatop" class="tsd-kind-icon">dst<wbr>Atop</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#dstin" class="tsd-kind-icon">dst<wbr>In</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#dstout" class="tsd-kind-icon">dst<wbr>Out</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#dstover" class="tsd-kind-icon">dst<wbr>Over</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#exclusion" class="tsd-kind-icon">exclusion</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#hardlight" class="tsd-kind-icon">hard<wbr>Light</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#hue" class="tsd-kind-icon">hue</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#lighten" class="tsd-kind-icon">lighten</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#luminosity" class="tsd-kind-icon">luminosity</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#modulate" class="tsd-kind-icon">modulate</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#multiply" class="tsd-kind-icon">multiply</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#overlay" class="tsd-kind-icon">overlay</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#plus" class="tsd-kind-icon">plus</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#saturation" class="tsd-kind-icon">saturation</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#screen" class="tsd-kind-icon">screen</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#softlight" class="tsd-kind-icon">soft<wbr>Light</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#src" class="tsd-kind-icon">src</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#srcatop" class="tsd-kind-icon">srcATop</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#srcin" class="tsd-kind-icon">src<wbr>In</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#srcout" class="tsd-kind-icon">src<wbr>Out</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#srcover" class="tsd-kind-icon">src<wbr>Over</a></li>
<li class="tsd-kind-enum-member tsd-parent-kind-enum"><a href="_dart_ui_index_.blendmode.html#xor" class="tsd-kind-icon">xor</a></li>
</ul>
</section>
</div>
</section>
</section>
<section class="tsd-panel-group tsd-member-group ">
<h2>Enumeration members</h2>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="clear" class="tsd-anchor"></a>
<h3>clear</h3>
<div class="tsd-signature tsd-kind-icon">clear<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L2">dart/ui/blendMode.ts:2</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="color" class="tsd-anchor"></a>
<h3>color</h3>
<div class="tsd-signature tsd-kind-icon">color<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L29">dart/ui/blendMode.ts:29</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="colorburn" class="tsd-anchor"></a>
<h3>color<wbr>Burn</h3>
<div class="tsd-signature tsd-kind-icon">color<wbr>Burn<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L21">dart/ui/blendMode.ts:21</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="colordodge" class="tsd-anchor"></a>
<h3>color<wbr>Dodge</h3>
<div class="tsd-signature tsd-kind-icon">color<wbr>Dodge<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L20">dart/ui/blendMode.ts:20</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="darken" class="tsd-anchor"></a>
<h3>darken</h3>
<div class="tsd-signature tsd-kind-icon">darken<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L18">dart/ui/blendMode.ts:18</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="difference" class="tsd-anchor"></a>
<h3>difference</h3>
<div class="tsd-signature tsd-kind-icon">difference<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L24">dart/ui/blendMode.ts:24</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="dst" class="tsd-anchor"></a>
<h3>dst</h3>
<div class="tsd-signature tsd-kind-icon">dst<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L4">dart/ui/blendMode.ts:4</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="dstatop" class="tsd-anchor"></a>
<h3>dst<wbr>Atop</h3>
<div class="tsd-signature tsd-kind-icon">dst<wbr>Atop<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L12">dart/ui/blendMode.ts:12</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="dstin" class="tsd-anchor"></a>
<h3>dst<wbr>In</h3>
<div class="tsd-signature tsd-kind-icon">dst<wbr>In<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L8">dart/ui/blendMode.ts:8</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="dstout" class="tsd-anchor"></a>
<h3>dst<wbr>Out</h3>
<div class="tsd-signature tsd-kind-icon">dst<wbr>Out<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L10">dart/ui/blendMode.ts:10</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="dstover" class="tsd-anchor"></a>
<h3>dst<wbr>Over</h3>
<div class="tsd-signature tsd-kind-icon">dst<wbr>Over<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L6">dart/ui/blendMode.ts:6</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="exclusion" class="tsd-anchor"></a>
<h3>exclusion</h3>
<div class="tsd-signature tsd-kind-icon">exclusion<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L25">dart/ui/blendMode.ts:25</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="hardlight" class="tsd-anchor"></a>
<h3>hard<wbr>Light</h3>
<div class="tsd-signature tsd-kind-icon">hard<wbr>Light<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L22">dart/ui/blendMode.ts:22</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="hue" class="tsd-anchor"></a>
<h3>hue</h3>
<div class="tsd-signature tsd-kind-icon">hue<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L27">dart/ui/blendMode.ts:27</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="lighten" class="tsd-anchor"></a>
<h3>lighten</h3>
<div class="tsd-signature tsd-kind-icon">lighten<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L19">dart/ui/blendMode.ts:19</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="luminosity" class="tsd-anchor"></a>
<h3>luminosity</h3>
<div class="tsd-signature tsd-kind-icon">luminosity<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L30">dart/ui/blendMode.ts:30</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="modulate" class="tsd-anchor"></a>
<h3>modulate</h3>
<div class="tsd-signature tsd-kind-icon">modulate<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L15">dart/ui/blendMode.ts:15</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="multiply" class="tsd-anchor"></a>
<h3>multiply</h3>
<div class="tsd-signature tsd-kind-icon">multiply<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L26">dart/ui/blendMode.ts:26</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="overlay" class="tsd-anchor"></a>
<h3>overlay</h3>
<div class="tsd-signature tsd-kind-icon">overlay<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L17">dart/ui/blendMode.ts:17</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="plus" class="tsd-anchor"></a>
<h3>plus</h3>
<div class="tsd-signature tsd-kind-icon">plus<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L14">dart/ui/blendMode.ts:14</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="saturation" class="tsd-anchor"></a>
<h3>saturation</h3>
<div class="tsd-signature tsd-kind-icon">saturation<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L28">dart/ui/blendMode.ts:28</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="screen" class="tsd-anchor"></a>
<h3>screen</h3>
<div class="tsd-signature tsd-kind-icon">screen<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L16">dart/ui/blendMode.ts:16</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="softlight" class="tsd-anchor"></a>
<h3>soft<wbr>Light</h3>
<div class="tsd-signature tsd-kind-icon">soft<wbr>Light<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L23">dart/ui/blendMode.ts:23</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="src" class="tsd-anchor"></a>
<h3>src</h3>
<div class="tsd-signature tsd-kind-icon">src<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L3">dart/ui/blendMode.ts:3</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="srcatop" class="tsd-anchor"></a>
<h3>srcATop</h3>
<div class="tsd-signature tsd-kind-icon">srcATop<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L11">dart/ui/blendMode.ts:11</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="srcin" class="tsd-anchor"></a>
<h3>src<wbr>In</h3>
<div class="tsd-signature tsd-kind-icon">src<wbr>In<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L7">dart/ui/blendMode.ts:7</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="srcout" class="tsd-anchor"></a>
<h3>src<wbr>Out</h3>
<div class="tsd-signature tsd-kind-icon">src<wbr>Out<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L9">dart/ui/blendMode.ts:9</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="srcover" class="tsd-anchor"></a>
<h3>src<wbr>Over</h3>
<div class="tsd-signature tsd-kind-icon">src<wbr>Over<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L5">dart/ui/blendMode.ts:5</a></li>
</ul>
</aside>
</section>
<section class="tsd-panel tsd-member tsd-kind-enum-member tsd-parent-kind-enum">
<a name="xor" class="tsd-anchor"></a>
<h3>xor</h3>
<div class="tsd-signature tsd-kind-icon">xor<span class="tsd-signature-symbol">:</span></div>
<aside class="tsd-sources">
<ul>
<li>Defined in <a href="https://github.com/chgibb/flua/blob/70950f6/runtime/dart/ui/blendMode.ts#L13">dart/ui/blendMode.ts:13</a></li>
</ul>
</aside>
</section>
</section>
</div>
<div class="col-4 col-menu menu-sticky-wrap menu-highlight">
<nav class="tsd-navigation primary">
<ul>
<li class="globals ">
<a href="../index.html"><em>Globals</em></a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_dart_async_index_.html">"dart/async/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_dart_collection_index_.html">"dart/collection/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_dart_convert_index_.html">"dart/convert/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_dart_core_index_.html">"dart/core/index"</a>
</li>
<li class="current tsd-kind-module">
<a href="../modules/_dart_ui_index_.html">"dart/ui/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_flutter_animation_index_.html">"flutter/animation/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_flutter_cupertino_index_.html">"flutter/cupertino/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_flutter_gestures_index_.html">"flutter/gestures/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_flutter_material_index_.html">"flutter/material/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_flutter_painting_index_.html">"flutter/painting/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_flutter_rendering_index_.html">"flutter/rendering/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_flutter_services_index_.html">"flutter/services/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_flutter_widgets_index_.html">"flutter/widgets/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_http_index_.html">"http/index"</a>
</li>
<li class=" tsd-kind-module">
<a href="../modules/_scopedmodel_index_.html">"scoped<wbr>Model/index"</a>
</li>
</ul>
</nav>
<nav class="tsd-navigation secondary menu-sticky">
<ul class="before-current">
</ul>
<ul class="current">
<li class="current tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.blendmode.html" class="tsd-kind-icon">Blend<wbr>Mode</a>
<ul>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#clear" class="tsd-kind-icon">clear</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#color" class="tsd-kind-icon">color</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#colorburn" class="tsd-kind-icon">color<wbr>Burn</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#colordodge" class="tsd-kind-icon">color<wbr>Dodge</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#darken" class="tsd-kind-icon">darken</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#difference" class="tsd-kind-icon">difference</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#dst" class="tsd-kind-icon">dst</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#dstatop" class="tsd-kind-icon">dst<wbr>Atop</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#dstin" class="tsd-kind-icon">dst<wbr>In</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#dstout" class="tsd-kind-icon">dst<wbr>Out</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#dstover" class="tsd-kind-icon">dst<wbr>Over</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#exclusion" class="tsd-kind-icon">exclusion</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#hardlight" class="tsd-kind-icon">hard<wbr>Light</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#hue" class="tsd-kind-icon">hue</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#lighten" class="tsd-kind-icon">lighten</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#luminosity" class="tsd-kind-icon">luminosity</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#modulate" class="tsd-kind-icon">modulate</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#multiply" class="tsd-kind-icon">multiply</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#overlay" class="tsd-kind-icon">overlay</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#plus" class="tsd-kind-icon">plus</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#saturation" class="tsd-kind-icon">saturation</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#screen" class="tsd-kind-icon">screen</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#softlight" class="tsd-kind-icon">soft<wbr>Light</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#src" class="tsd-kind-icon">src</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#srcatop" class="tsd-kind-icon">srcATop</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#srcin" class="tsd-kind-icon">src<wbr>In</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#srcout" class="tsd-kind-icon">src<wbr>Out</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#srcover" class="tsd-kind-icon">src<wbr>Over</a>
</li>
<li class=" tsd-kind-enum-member tsd-parent-kind-enum">
<a href="_dart_ui_index_.blendmode.html#xor" class="tsd-kind-icon">xor</a>
</li>
</ul>
</li>
</ul>
<ul class="after-current">
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.boxheightstyle.html" class="tsd-kind-icon">Box<wbr>Height<wbr>Style</a>
</li>
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.boxwidthstyle.html" class="tsd-kind-icon">Box<wbr>Width<wbr>Style</a>
</li>
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.brightness.html" class="tsd-kind-icon">Brightness</a>
</li>
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.filterquality.html" class="tsd-kind-icon">Filter<wbr>Quality</a>
</li>
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.fontstyle.html" class="tsd-kind-icon">Font<wbr>Style</a>
</li>
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.textaffinity.html" class="tsd-kind-icon">Text<wbr>Affinity</a>
</li>
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.textalign.html" class="tsd-kind-icon">Text<wbr>Align</a>
</li>
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.textbaseline.html" class="tsd-kind-icon">Text<wbr>Baseline</a>
</li>
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.textdirection.html" class="tsd-kind-icon">Text<wbr>Direction</a>
</li>
<li class=" tsd-kind-enum tsd-parent-kind-module">
<a href="_dart_ui_index_.tilemode.html" class="tsd-kind-icon">Tile<wbr>Mode</a>
</li>
<li class=" tsd-kind-class tsd-parent-kind-module">
<a href="../classes/_dart_ui_index_.color.html" class="tsd-kind-icon">Color</a>
</li>
<li class=" tsd-kind-class tsd-parent-kind-module">
<a href="../classes/_dart_ui_index_.fontweight.html" class="tsd-kind-icon">Font<wbr>Weight</a>
</li>
<li class=" tsd-kind-class tsd-parent-kind-module">
<a href="../classes/_dart_ui_index_.offset.html" class="tsd-kind-icon">Offset</a>
</li>
<li class=" tsd-kind-class tsd-parent-kind-module">
<a href="../classes/_dart_ui_index_.radius.html" class="tsd-kind-icon">Radius</a>
</li>
<li class=" tsd-kind-class tsd-parent-kind-module">
<a href="../classes/_dart_ui_index_.rect.html" class="tsd-kind-icon">Rect</a>
</li>
<li class=" tsd-kind-class tsd-parent-kind-module">
<a href="../classes/_dart_ui_index_.size.html" class="tsd-kind-icon">Size</a>
</li>
<li class=" tsd-kind-class tsd-parent-kind-module">
<a href="../classes/_dart_ui_index_.textposition.html" class="tsd-kind-icon">Text<wbr>Position</a>
</li>
<li class=" tsd-kind-class tsd-parent-kind-module">
<a href="../classes/_dart_ui_index_.textrange.html" class="tsd-kind-icon">Text<wbr>Range</a>
</li>
<li class=" tsd-kind-interface tsd-parent-kind-module">
<a href="../interfaces/_dart_ui_index_.textpositionprops.html" class="tsd-kind-icon">Text<wbr>Position<wbr>Props</a>
</li>
<li class=" tsd-kind-interface tsd-parent-kind-module">
<a href="../interfaces/_dart_ui_index_.textrangeprops.html" class="tsd-kind-icon">Text<wbr>Range<wbr>Props</a>
</li>
<li class=" tsd-kind-type-alias tsd-parent-kind-module">
<a href="../modules/_dart_ui_index_.html#voidcallback" class="tsd-kind-icon">Void<wbr>Callback</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<footer class="with-border-bottom">
<div class="container">
<h2>Legend</h2>
<div class="tsd-legend-group">
<ul class="tsd-legend">
<li class="tsd-kind-module"><span class="tsd-kind-icon">Module</span></li>
<li class="tsd-kind-object-literal"><span class="tsd-kind-icon">Object literal</span></li>
<li class="tsd-kind-variable"><span class="tsd-kind-icon">Variable</span></li>
<li class="tsd-kind-function"><span class="tsd-kind-icon">Function</span></li>
<li class="tsd-kind-function tsd-has-type-parameter"><span class="tsd-kind-icon">Function with type parameter</span></li>
<li class="tsd-kind-index-signature"><span class="tsd-kind-icon">Index signature</span></li>
<li class="tsd-kind-type-alias"><span class="tsd-kind-icon">Type alias</span></li>
<li class="tsd-kind-type-alias tsd-has-type-parameter"><span class="tsd-kind-icon">Type alias with type parameter</span></li>
</ul>
<ul class="tsd-legend">
<li class="tsd-kind-enum"><span class="tsd-kind-icon">Enumeration</span></li>
<li class="tsd-kind-enum-member"><span class="tsd-kind-icon">Enumeration member</span></li>
<li class="tsd-kind-property tsd-parent-kind-enum"><span class="tsd-kind-icon">Property</span></li>
<li class="tsd-kind-method tsd-parent-kind-enum"><span class="tsd-kind-icon">Method</span></li>
</ul>
<ul class="tsd-legend">
<li class="tsd-kind-interface"><span class="tsd-kind-icon">Interface</span></li>
<li class="tsd-kind-interface tsd-has-type-parameter"><span class="tsd-kind-icon">Interface with type parameter</span></li>
<li class="tsd-kind-constructor tsd-parent-kind-interface"><span class="tsd-kind-icon">Constructor</span></li>
<li class="tsd-kind-property tsd-parent-kind-interface"><span class="tsd-kind-icon">Property</span></li>
<li class="tsd-kind-method tsd-parent-kind-interface"><span class="tsd-kind-icon">Method</span></li>
<li class="tsd-kind-index-signature tsd-parent-kind-interface"><span class="tsd-kind-icon">Index signature</span></li>
</ul>
<ul class="tsd-legend">
<li class="tsd-kind-class"><span class="tsd-kind-icon">Class</span></li>
<li class="tsd-kind-class tsd-has-type-parameter"><span class="tsd-kind-icon">Class with type parameter</span></li>
<li class="tsd-kind-constructor tsd-parent-kind-class"><span class="tsd-kind-icon">Constructor</span></li>
<li class="tsd-kind-property tsd-parent-kind-class"><span class="tsd-kind-icon">Property</span></li>
<li class="tsd-kind-method tsd-parent-kind-class"><span class="tsd-kind-icon">Method</span></li>
<li class="tsd-kind-accessor tsd-parent-kind-class"><span class="tsd-kind-icon">Accessor</span></li>
<li class="tsd-kind-index-signature tsd-parent-kind-class"><span class="tsd-kind-icon">Index signature</span></li>
</ul>
<ul class="tsd-legend">
<li class="tsd-kind-constructor tsd-parent-kind-class tsd-is-inherited"><span class="tsd-kind-icon">Inherited constructor</span></li>
<li class="tsd-kind-property tsd-parent-kind-class tsd-is-inherited"><span class="tsd-kind-icon">Inherited property</span></li>
<li class="tsd-kind-method tsd-parent-kind-class tsd-is-inherited"><span class="tsd-kind-icon">Inherited method</span></li>
<li class="tsd-kind-accessor tsd-parent-kind-class tsd-is-inherited"><span class="tsd-kind-icon">Inherited accessor</span></li>
</ul>
<ul class="tsd-legend">
<li class="tsd-kind-property tsd-parent-kind-class tsd-is-protected"><span class="tsd-kind-icon">Protected property</span></li>
<li class="tsd-kind-method tsd-parent-kind-class tsd-is-protected"><span class="tsd-kind-icon">Protected method</span></li>
<li class="tsd-kind-accessor tsd-parent-kind-class tsd-is-protected"><span class="tsd-kind-icon">Protected accessor</span></li>
</ul>
<ul class="tsd-legend">
<li class="tsd-kind-property tsd-parent-kind-class tsd-is-private"><span class="tsd-kind-icon">Private property</span></li>
<li class="tsd-kind-method tsd-parent-kind-class tsd-is-private"><span class="tsd-kind-icon">Private method</span></li>
<li class="tsd-kind-accessor tsd-parent-kind-class tsd-is-private"><span class="tsd-kind-icon">Private accessor</span></li>
</ul>
<ul class="tsd-legend">
<li class="tsd-kind-property tsd-parent-kind-class tsd-is-static"><span class="tsd-kind-icon">Static property</span></li>
<li class="tsd-kind-call-signature tsd-parent-kind-class tsd-is-static"><span class="tsd-kind-icon">Static method</span></li>
</ul>
</div>
</div>
</footer>
<div class="container tsd-generator">
<p>Generated using <a href="https://typedoc.org/" target="_blank">TypeDoc</a></p>
</div>
<div class="overlay"></div>
<script src="../assets/js/main.js"></script>
<script>if (location.protocol == 'file:') document.write('<script src="../assets/js/search.js"><' + '/script>');</script>
</body>
</html>
"pile_set_name": "Github"
} |
<?xml version="1.0" standalone="no"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:exsl="http://exslt.org/common"
xmlns:xlink="http://www.w3.org/1999/xlink">
<xsl:output method="html"/>
<xsl:param name="PERI_COL_OPB" select="'#339900'"/>
<xsl:param name="PERI_COL_WHIT" select="'#FFFFFF'"/>
<xsl:param name="PERI_COL_INFO" select="'#2233FF'"/>
<xsl:param name="PERI_COL_BLCK" select="'#000000'"/>
<xsl:param name="PERI_COL_GREY" select="'#CCCCCC'"/>
<xsl:param name="PERI_COL_XPRP" select="'#810017'"/>
<xsl:param name="PERI_COL_DOCLNK" select="'#FF9900'"/>
<!-- ======================= MAIN PERIPHERAL SECTION =============================== -->
<xsl:template name="Layout_Peripherals">
<BR></BR>
<BR></BR>
<BR></BR>
<BR></BR>
<A name="_{@INSTANCE}"/>
<SPAN style="color:{$DS_COL_BLCK}; font: bold italic 12px Verdana,Arial,Helvetica,sans-serif">
<xsl:value-of select="@INSTANCE"/>
<BR></BR>
________________________________________________
</SPAN>
<BR></BR>
<BR></BR>
<TABLE BGCOLOR="{$PERI_COL_WHIT}" WIDTH="800" COLS="2" cellspacing="0" cellpadding="0" border="0">
<!-- Layout the Module information table-->
<TD COLSPAN="1" width="50%" align="LEFT" valign="TOP">
<TABLE BGCOLOR="{$PERI_COL_WHIT}" WIDTH="400" COLS="2" cellspacing="0" cellpadding="0" border="0">
<TD COLSPAN="1" width="50%" align="MIDDLE" valign="TOP">
<IMG SRC="imgs/{@INSTANCE}.jpg" alt="{@INSTANCE} IP Image" border="0" vspace="10" hspace="0"/>
</TD>
<TR></TR>
<TD COLSPAN="1" width="50%" align="LEFT" valign="TOP">
<xsl:call-template name="Peri_PinoutTable"/>
</TD>
</TABLE>
</TD>
<TD COLSPAN="1" width="50%" align="RIGHT" valign="BOTTOM">
<xsl:call-template name="Peri_InfoTable"/>
</TD>
</TABLE>
</xsl:template>
<!-- ======================= PERIHERAL TABLE PARTS =============================== -->
<!-- Layout the Module's Information table -->
<xsl:template name="Peri_InfoTable">
<TABLE BGCOLOR="{$PERI_COL_BLCK}" WIDTH="410" COLS="5" cellspacing="1" cellpadding="2" border="0">
<TH COLSPAN="5" width="100%" align="middle" bgcolor="{$PERI_COL_XPRP}"><SPAN style="color:{$PERI_COL_WHIT}; font: bold 12px Verdana,Arial,Helvetica,sans-serif">General</SPAN></TH>
<TR></TR>
<TH COLSPAN="3" width="60%" align="middle" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">Type</SPAN></TH>
<TH COLSPAN="2" width="40%" align="middle" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><A HREF="docs/{@MODTYPE}.pdf" style="text-decoration:none; color:{$PERI_COL_XPRP}"><xsl:value-of select="@MODTYPE"/></A></SPAN></TH>
<TR></TR>
<TH COLSPAN="3" width="60%" align="middle" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">Version</SPAN></TH>
<TH COLSPAN="2" width="40%" align="middle" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@HWVERSION"/></SPAN></TH>
<TR></TR>
<TH COLSPAN="5" width="100%" align="middle" bgcolor="{$PERI_COL_XPRP}"><SPAN style="color:{$PERI_COL_WHIT}; font: bold 12px Verdana,Arial,Helvetica,sans-serif">Parameters</SPAN></TH>
<TR></TR>
<TH COLSPAN="5" width="100%" align="left" bgcolor="{$PERI_COL_WHIT}">
<SPAN style="color:{$PERI_COL_INFO}; font: bold 9px Verdana,Arial,Helvetica,sans-serif">
The parameters listed here are only those set in the MHS file. Refer to the IP
<A HREF="docs/{@MODTYPE}.pdf" style="text-decoration:none; color:{$PERI_COL_XPRP}"> documentation </A>for complete information about module parameters.
</SPAN>
</TH>
<TR></TR>
<TH COLSPAN="3" width="60%" align="middle" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">Name</SPAN></TH>
<TH COLSPAN="2" width="40%" align="middle" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">Value</SPAN></TH>
<xsl:for-each select="PARAMETER">
<TR></TR>
<TH COLSPAN="3" width="60%" align="left" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@NAME"/></SPAN></TH>
<TH COLSPAN="2" width="40%" align="left" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@VALUE"/></SPAN></TH>
</xsl:for-each>
<TR></TR>
<TH COLSPAN="5" width="100%" align="middle" bgcolor="{$PERI_COL_XPRP}"><SPAN style="color:{$PERI_COL_WHIT}; font: bold 12px Verdana,Arial,Helvetica,sans-serif">Device Utilization</SPAN></TH>
<TR></TR>
<xsl:choose>
<xsl:when test="not(RESOURCES)">
<TH COLSPAN="5" width="100%" align="middle" bgcolor="{$PERI_COL_WHIT}">
<SPAN style="color:{$PERI_COL_INFO}; font: bold 9px Verdana,Arial,Helvetica,sans-serif">
Device utilization information is not available for this IP.
</SPAN>
</TH>
</xsl:when>
<xsl:otherwise>
<TH COLSPAN="2" width="55%" align="middle" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">Resource Type</SPAN></TH>
<TH COLSPAN="1" width="15%" align="middle" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">Used</SPAN></TH>
<TH COLSPAN="1" width="15%" align="middle" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">Available</SPAN></TH>
<TH COLSPAN="1" width="15%" align="middle" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">Percent</SPAN></TH>
<xsl:for-each select="RESOURCES/RESOURCE">
<TR></TR>
<TH COLSPAN="2" width="55%" align="left" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@TYPE"/></SPAN></TH>
<TH COLSPAN="1" width="15%" align="middle" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@USED"/></SPAN></TH>
<TH COLSPAN="1" width="15%" align="middle" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@TOTAL"/></SPAN></TH>
<TH COLSPAN="1" width="15%" align="middle" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@PERCENT"/></SPAN></TH>
</xsl:for-each>
</xsl:otherwise>
</xsl:choose>
<TR></TR>
<TH COLSPAN="5" width="100%" align="middle" bgcolor="{$PERI_COL_XPRP}"><SPAN style="color:{$PERI_COL_WHIT}; font: bold 12px Verdana,Arial,Helvetica,sans-serif"></SPAN></TH>
</TABLE>
</xsl:template>
<!-- Layout the Module's pinout table -->
<xsl:template name="Peri_PinoutTable">
<TABLE BGCOLOR="{$PERI_COL_BLCK}" WIDTH="310" COLS="6" cellspacing="1" cellpadding="2" border="0">
<TH COLSPAN="6" width="100%" align="middle" bgcolor="{$PERI_COL_XPRP}"><SPAN style="color:{$PERI_COL_WHIT}; font: bold 9px Verdana,Arial,Helvetica,sans-serif">PINOUT</SPAN></TH>
<TR></TR>
<TH COLSPAN="6" width="100%" align="left" bgcolor="{$PERI_COL_WHIT}">
<SPAN style="color:{$PERI_COL_INFO}; font: bold 9px Verdana,Arial,Helvetica,sans-serif">
The ports listed here are only those connected in the MHS file. Refer to the IP
<A HREF="docs/{@MODTYPE}.pdf" style="text-decoration:none; color:{$PERI_COL_XPRP}"> documentation </A>for complete information about module ports.
</SPAN>
</TH>
<TR></TR>
<TH COLSPAN="1" width="5%" align="left" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">#</SPAN></TH>
<TH COLSPAN="2" width="25%" align="left" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">NAME</SPAN></TH>
<TH COLSPAN="1" width="10%" align="left" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">DIR</SPAN></TH>
<TH COLSPAN="2" width="60%" align="left" bgcolor="{$PERI_COL_GREY}"><SPAN style="color:{$PERI_COL_XPRP}; font: bold 10px Verdana,Arial,Helvetica,sans-serif">SIGNAL</SPAN></TH>
<xsl:for-each select="PORT[(not(@SIGNAME = '__DEF__') and not(@SIGNAME = '__NOC__'))]">
<xsl:sort data-type="number" select="@INDEX" order="ascending"/>
<TR></TR>
<TH COLSPAN="1" width="5%" align="left" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@INDEX"/></SPAN></TH>
<TH COLSPAN="2" width="25%" align="left" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@NAME"/></SPAN></TH>
<TH COLSPAN="1" width="10%" align="left" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@DIR"/></SPAN></TH>
<TH COLSPAN="2" width="60%" align="left" bgcolor="{$PERI_COL_WHIT}"><SPAN style="color:{$PERI_COL_BLCK}; font: bold 10px Verdana,Arial,Helvetica,sans-serif"><xsl:value-of select="@SIGNAME"/></SPAN></TH>
</xsl:for-each>
</TABLE>
</xsl:template>
</xsl:stylesheet>
#!/usr/bin/env bash
# Copyright 2015 Johns Hopkins University (author: Vijayaditya Peddinti)
# Apache 2.0
# This script downloads the impulse responses and noise files from the
# Reverb2014 challenge
# and converts them to wav files with the required sampling rate
#==============================================
download=true
sampling_rate=8k
output_bit=16
DBname=RVB2014
file_splitter= #script to generate job scripts given the command file
. ./cmd.sh
. ./path.sh
. ./utils/parse_options.sh
if [ $# != 3 ]; then
echo "Usage: "
echo " $0 [options] <rir-home> <output-dir> <log-dir>"
echo "e.g.:"
echo " $0 --download true db/RIR_databases/ data/impulses_noises exp/make_reverb/log"
exit 1;
fi
RIR_home=$1
output_dir=$2
log_dir=$3
if [ "$download" = true ]; then
mkdir -p $RIR_home
(cd $RIR_home;
rm -rf reverb_tools*.tgz
wget http://reverb2014.dereverberation.com/tools/reverb_tools_for_Generate_mcTrainData.tgz || exit 1;
tar -zxvf reverb_tools_for_Generate_mcTrainData.tgz
wget http://reverb2014.dereverberation.com/tools/reverb_tools_for_Generate_SimData.tgz || exit 1;
tar -zxvf reverb_tools_for_Generate_SimData.tgz >/dev/null
)
fi
Reverb2014_home1=$RIR_home/reverb_tools_for_Generate_mcTrainData
Reverb2014_home2=$RIR_home/reverb_tools_for_Generate_SimData
# Reverb2014 RIRs and noise
#--------------------------
# data is stored as multi-channel wav-files
command_file=$log_dir/${DBname}_read_rir_noise.sh
echo "">$command_file
# Simdata for training
#--------------------
type_num=1
data_files=( $(find $Reverb2014_home1/RIR -name '*.wav' -type f -print || exit -1) )
files_done=0
total_files=$(echo ${data_files[@]}|wc -w)
echo "" > $log_dir/${DBname}_type${type_num}.rir.list
echo "Found $total_files impulse responses in ${Reverb2014_home1}/RIR."
for data_file in ${data_files[@]}; do
output_file_name=${DBname}_type${type_num}_`basename $data_file | tr '[:upper:]' '[:lower:]'`
echo "sox -t wav $data_file -t wav -r $sampling_rate -e signed-integer -b $output_bit ${output_dir}/${output_file_name}" >> $command_file
echo ${output_dir}/${output_file_name} >> $log_dir/${DBname}_type${type_num}.rir.list
files_done=$((files_done + 1))
done
data_files=( $(find $Reverb2014_home1/NOISE -name '*.wav' -type f -print || exit -1) )
files_done=0
total_files=$(echo ${data_files[@]}|wc -w)
echo "" > $log_dir/${DBname}_type${type_num}.noise.list
echo "Found $total_files noises in ${Reverb2014_home1}/NOISE."
for data_file in ${data_files[@]}; do
output_file_name=${DBname}_type${type_num}_`basename $data_file| tr '[:upper:]' '[:lower:]'`
echo "sox -t wav $data_file -t wav -r $sampling_rate -e signed-integer -b $output_bit ${output_dir}/${output_file_name}" >> $command_file
echo ${output_dir}/${output_file_name} >> $log_dir/${DBname}_type${type_num}.noise.list
files_done=$((files_done + 1))
done
# Simdata for devset
type_num=$((type_num + 1))
data_files=( $(find $Reverb2014_home2/RIR -name '*.wav' -type f -print || exit -1) )
files_done=0
total_files=$(echo ${data_files[@]}|wc -w)
echo "" > $log_dir/${DBname}_type${type_num}.rir.list
echo "Found $total_files impulse responses in ${Reverb2014_home2}/RIR."
for data_file in ${data_files[@]}; do
output_file_name=${DBname}_type${type_num}_`basename $data_file| tr '[:upper:]' '[:lower:]'`
echo "sox -t wav $data_file -t wav -r $sampling_rate -e signed-integer -b $output_bit ${output_dir}/${output_file_name}" >> $command_file
echo ${output_dir}/${output_file_name} >> $log_dir/${DBname}_type${type_num}.rir.list
files_done=$((files_done + 1))
done
data_files=( $(find $Reverb2014_home2/NOISE -name '*.wav' -type f -print || exit -1) )
files_done=0
total_files=$(echo ${data_files[@]}|wc -w)
echo "" > $log_dir/${DBname}_type${type_num}.noise.list
echo "Found $total_files noises in ${Reverb2014_home2}/NOISE."
for data_file in ${data_files[@]}; do
output_file_name=${DBname}_type${type_num}_`basename $data_file | tr '[:upper:]' '[:lower:]'`
echo "sox -t wav $data_file -t wav -r $sampling_rate -e signed-integer -b $output_bit ${output_dir}/${output_file_name}" >> $command_file
echo ${output_dir}/${output_file_name} >> $log_dir/${DBname}_type${type_num}.noise.list
files_done=$((files_done + 1))
done
if [ ! -z "$file_splitter" ]; then
num_jobs=$($file_splitter $command_file || exit 1)
job_file=${command_file%.sh}.JOB.sh
job_log=${command_file%.sh}.JOB.log
else
num_jobs=1
job_file=$command_file
job_log=${command_file%.sh}.log
fi
# execute the commands using the above created array jobs
time $decode_cmd --max-jobs-run 40 JOB=1:$num_jobs $job_log \
sh $job_file || exit 1;
# get the Reverb2014 room names to pair the noises and impulse responses
for type_num in `seq 1 2`; do
noise_patterns=( $(ls ${output_dir}/${DBname}_type${type_num}_noise*.wav | xargs -n1 basename | python -c "
import sys
for line in sys.stdin:
    name = line.split('${DBname}_type${type_num}_noise_')[1]
    print(name.split('_')[0])
" | sort -u) )
for noise_pattern in ${noise_patterns[@]}; do
set_file=$output_dir/info/noise_impulse_${DBname}_$noise_pattern
echo -n "noise_files =" > ${set_file}
ls ${output_dir}/${DBname}_type${type_num}_noise*${noise_pattern}*.wav | awk '{ ORS=" "; print;} END{print "\n"}' >> ${set_file}
echo -n "impulse_files =" >> ${set_file}
ls ${output_dir}/${DBname}_type${type_num}_rir*${noise_pattern}*.wav | awk '{ ORS=" "; print; } END{print "\n"}' >> ${set_file}
done
done
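The pairing step above relies on the filename convention `${DBname}_type${type_num}_noise_<room>_...` produced by the sox commands; a minimal Python sketch of the room-pattern extraction (the filenames in the usage example are hypothetical, following that convention):

```python
def room_patterns(filenames, dbname="reverb2014", type_num=1):
    """Collect the unique room tokens that follow the noise prefix."""
    prefix = "%s_type%d_noise_" % (dbname, type_num)
    rooms = set()
    for name in filenames:
        # the part right after the prefix starts with the room identifier
        rooms.add(name.split(prefix)[1].split("_")[0])
    return sorted(rooms)
```

For example, `reverb2014_type1_noise_room1_a.wav` and `reverb2014_type1_noise_room2_a.wav` yield the patterns `room1` and `room2`, which the loop then uses to pair noises with impulse responses.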
| {
"pile_set_name": "Github"
} |
FILE(REMOVE_RECURSE
"../msg_gen"
"../src/quadrotor_msgs/msg"
"CMakeFiles/rospack_genmsg_libexe"
)
# Per-language clean rules from dependency scanning.
FOREACH(lang)
INCLUDE(CMakeFiles/rospack_genmsg_libexe.dir/cmake_clean_${lang}.cmake OPTIONAL)
ENDFOREACH(lang)
| {
"pile_set_name": "Github"
} |
<?xml version="1.0" encoding="UTF-8"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="13771" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" launchScreen="YES" useTraitCollections="YES" colorMatched="YES" initialViewController="01J-lp-oVM">
<device id="retina4_7" orientation="portrait">
<adaptation id="fullscreen"/>
</device>
<dependencies>
<deployment identifier="iOS"/>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="13772"/>
<capability name="Aspect ratio constraints" minToolsVersion="5.1"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<scenes>
<!--View Controller-->
<scene sceneID="EHf-IW-A2E">
<objects>
<viewController id="01J-lp-oVM" sceneMemberID="viewController">
<layoutGuides>
<viewControllerLayoutGuide type="top" id="Llm-lL-Icb"/>
<viewControllerLayoutGuide type="bottom" id="xb3-aO-Qok"/>
</layoutGuides>
<view key="view" contentMode="scaleToFill" id="Ze5-6b-2t3">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<subviews>
<imageView userInteractionEnabled="NO" contentMode="scaleToFill" horizontalHuggingPriority="251" verticalHuggingPriority="251" misplaced="YES" image="Background" translatesAutoresizingMaskIntoConstraints="NO" id="1Ps-5a-wQE">
<rect key="frame" x="0.0" y="0.0" width="414" height="736"/>
</imageView>
<imageView userInteractionEnabled="NO" contentMode="scaleToFill" horizontalHuggingPriority="251" verticalHuggingPriority="251" misplaced="YES" image="Logo" translatesAutoresizingMaskIntoConstraints="NO" id="hCy-Sg-N4N">
<rect key="frame" x="36" y="240" width="341" height="256"/>
<constraints>
<constraint firstAttribute="width" secondItem="hCy-Sg-N4N" secondAttribute="height" multiplier="341:256" id="7bg-Pd-V4M"/>
</constraints>
</imageView>
</subviews>
<color key="backgroundColor" red="0.90196078431372551" green="0.90196078431372551" blue="0.90196078431372551" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<constraints>
<constraint firstItem="1Ps-5a-wQE" firstAttribute="centerX" secondItem="hCy-Sg-N4N" secondAttribute="centerX" id="0OI-2L-cEK"/>
<constraint firstItem="1Ps-5a-wQE" firstAttribute="bottom" secondItem="xb3-aO-Qok" secondAttribute="top" id="MQO-GI-am3"/>
<constraint firstItem="1Ps-5a-wQE" firstAttribute="width" secondItem="Ze5-6b-2t3" secondAttribute="width" id="dq4-Qv-rDP"/>
<constraint firstItem="hCy-Sg-N4N" firstAttribute="centerY" secondItem="Ze5-6b-2t3" secondAttribute="centerY" id="hF8-3a-UTD"/>
<constraint firstItem="1Ps-5a-wQE" firstAttribute="height" secondItem="Ze5-6b-2t3" secondAttribute="height" id="hsl-L6-dMb"/>
<constraint firstItem="hCy-Sg-N4N" firstAttribute="centerX" secondItem="Ze5-6b-2t3" secondAttribute="centerX" id="neI-al-F3p"/>
</constraints>
</view>
</viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="iYj-Kq-Ea1" userLabel="First Responder" sceneMemberID="firstResponder"/>
</objects>
<point key="canvasLocation" x="52" y="374"/>
</scene>
</scenes>
<resources>
<image name="Background" width="736" height="736"/>
<image name="Logo" width="341" height="256"/>
</resources>
</document>
| {
"pile_set_name": "Github"
} |
package com.tencent.mm.ui.account;
import com.tencent.mm.ui.base.MMAutoSwitchEditTextView.b;
final class EmailVerifyUI$2
implements MMAutoSwitchEditTextView.b
{
EmailVerifyUI$2(EmailVerifyUI paramEmailVerifyUI) {}
public final void bgL()
{
kRl.bp(false);
}
}
/* Location:
* Qualified Name: com.tencent.mm.ui.account.EmailVerifyUI.2
* Java Class Version: 6 (50.0)
* JD-Core Version: 0.7.1
*/ | {
"pile_set_name": "Github"
} |
/* Test of <sys/times.h> substitute.
Copyright (C) 2008-2020 Free Software Foundation, Inc.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>. */
/* Written by Simon Josefsson <simon@josefsson.org>, 2008. */
#include <config.h>
#include <sys/times.h>
static struct tms tms;
int
main (void)
{
clock_t t = tms.tms_utime + tms.tms_stime + tms.tms_cutime + tms.tms_cstime;
return t;
}
| {
"pile_set_name": "Github"
} |
// Copyright (C) 2018-2019 Yixuan Qiu <yixuan.qiu@cos.name>
//
// This Source Code Form is subject to the terms of the Mozilla
// Public License v. 2.0. If a copy of the MPL was not distributed
// with this file, You can obtain one at https://mozilla.org/MPL/2.0/.
#ifndef ARNOLDI_H
#define ARNOLDI_H
#include <Eigen/Core>
#include <cmath> // std::sqrt
#include <stdexcept> // std::invalid_argument
#include <sstream> // std::stringstream
#include "../MatOp/internal/ArnoldiOp.h"
#include "../Util/TypeTraits.h"
#include "../Util/SimpleRandom.h"
#include "UpperHessenbergQR.h"
#include "DoubleShiftQR.h"
namespace Spectra {
// Arnoldi factorization A * V = V * H + f * e'
// A: n x n
// V: n x k
// H: k x k
// f: n x 1
// e: [0, ..., 0, 1]
// V and H are allocated of dimension m, so the maximum value of k is m
template <typename Scalar, typename ArnoldiOpType>
class Arnoldi
{
private:
typedef Eigen::Index Index;
typedef Eigen::Matrix<Scalar, Eigen::Dynamic, Eigen::Dynamic> Matrix;
typedef Eigen::Matrix<Scalar, Eigen::Dynamic, 1> Vector;
typedef Eigen::Map<Matrix> MapMat;
typedef Eigen::Map<Vector> MapVec;
typedef Eigen::Map<const Matrix> MapConstMat;
typedef Eigen::Map<const Vector> MapConstVec;
protected:
// clang-format off
ArnoldiOpType m_op; // Operators for the Arnoldi factorization
const Index m_n; // dimension of A
const Index m_m; // maximum dimension of subspace V
Index m_k; // current dimension of subspace V
Matrix m_fac_V; // V matrix in the Arnoldi factorization
Matrix m_fac_H; // H matrix in the Arnoldi factorization
Vector m_fac_f; // residual in the Arnoldi factorization
Scalar m_beta; // ||f||, B-norm of f
const Scalar m_near_0; // a very small value, but 1.0 / m_near_0 does not overflow
// ~= 1e-307 for the "double" type
const Scalar m_eps; // the machine precision, ~= 1e-16 for the "double" type
// clang-format on
// Given orthonormal basis functions V, find a nonzero vector f such that V'Bf = 0
// Assume that f has been properly allocated
void expand_basis(MapConstMat& V, const Index seed, Vector& f, Scalar& fnorm)
{
using std::sqrt;
const Scalar thresh = m_eps * sqrt(Scalar(m_n));
Vector Vf(V.cols());
for (Index iter = 0; iter < 5; iter++)
{
// Randomly generate a new vector and orthogonalize it against V
SimpleRandom<Scalar> rng(seed + 123 * iter);
f.noalias() = rng.random_vec(m_n);
// f <- f - V * V'Bf, so that f is orthogonal to V in B-norm
m_op.trans_product(V, f, Vf);
f.noalias() -= V * Vf;
// fnorm <- ||f||
fnorm = m_op.norm(f);
// If fnorm is too close to zero, we try a new random vector,
// otherwise return the result
if (fnorm >= thresh)
return;
}
}
public:
Arnoldi(const ArnoldiOpType& op, Index m) :
m_op(op), m_n(op.rows()), m_m(m), m_k(0),
m_near_0(TypeTraits<Scalar>::min() * Scalar(10)),
m_eps(Eigen::NumTraits<Scalar>::epsilon())
{}
virtual ~Arnoldi() {}
// Const-reference to internal structures
const Matrix& matrix_V() const { return m_fac_V; }
const Matrix& matrix_H() const { return m_fac_H; }
const Vector& vector_f() const { return m_fac_f; }
Scalar f_norm() const { return m_beta; }
Index subspace_dim() const { return m_k; }
// Initialize with an operator and an initial vector
void init(MapConstVec& v0, Index& op_counter)
{
m_fac_V.resize(m_n, m_m);
m_fac_H.resize(m_m, m_m);
m_fac_f.resize(m_n);
m_fac_H.setZero();
// Verify the initial vector
const Scalar v0norm = m_op.norm(v0);
if (v0norm < m_near_0)
throw std::invalid_argument("initial residual vector cannot be zero");
// Points to the first column of V
MapVec v(m_fac_V.data(), m_n);
// Normalize
v.noalias() = v0 / v0norm;
// Compute H and f
Vector w(m_n);
m_op.perform_op(v.data(), w.data());
op_counter++;
m_fac_H(0, 0) = m_op.inner_product(v, w);
m_fac_f.noalias() = w - v * m_fac_H(0, 0);
// In some cases f is zero in exact arithmetics, but due to rounding errors
// it may contain tiny fluctuations. When this happens, we force f to be zero
if (m_fac_f.cwiseAbs().maxCoeff() < m_eps)
{
m_fac_f.setZero();
m_beta = Scalar(0);
}
else
{
m_beta = m_op.norm(m_fac_f);
}
// Indicate that this is a step-1 factorization
m_k = 1;
}
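A sketch of the same step-1 factorization in plain Python, assuming the standard Euclidean inner product (i.e. B = I); `matvec` stands in for `m_op.perform_op`:

```python
import math

def arnoldi_init(matvec, v0):
    """One-step Arnoldi factorization: A*v = h00*v + f, with v'f = 0."""
    nrm = math.sqrt(sum(x * x for x in v0))
    v = [x / nrm for x in v0]                      # normalized start vector
    w = matvec(v)                                  # w = A * v
    h00 = sum(a * b for a, b in zip(v, w))         # H(0, 0) = <v, w>
    f = [wi - h00 * vi for wi, vi in zip(w, v)]    # residual, orthogonal to v
    return v, h00, f
```

The returned residual is orthogonal to the subspace span{v} by construction, which is exactly the invariant `init()` establishes before `factorize_from()` extends the basis.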
// Arnoldi factorization starting from step-k
virtual void factorize_from(Index from_k, Index to_m, Index& op_counter)
{
using std::sqrt;
if (to_m <= from_k)
return;
if (from_k > m_k)
{
std::stringstream msg;
msg << "Arnoldi: from_k (= " << from_k << ") is larger than the current subspace dimension (= " << m_k << ")";
throw std::invalid_argument(msg.str());
}
const Scalar beta_thresh = m_eps * sqrt(Scalar(m_n));
// Pre-allocate vectors
Vector Vf(to_m);
Vector w(m_n);
// Keep the upper-left k x k submatrix of H and set other elements to 0
m_fac_H.rightCols(m_m - from_k).setZero();
m_fac_H.block(from_k, 0, m_m - from_k, from_k).setZero();
for (Index i = from_k; i <= to_m - 1; i++)
{
bool restart = false;
// If beta = 0, then the next V is not full rank
// We need to generate a new residual vector that is orthogonal
// to the current V, which we call a restart
if (m_beta < m_near_0)
{
MapConstMat V(m_fac_V.data(), m_n, i); // The first i columns
expand_basis(V, 2 * i, m_fac_f, m_beta);
restart = true;
}
// v <- f / ||f||
m_fac_V.col(i).noalias() = m_fac_f / m_beta; // The (i+1)-th column
// Note that H[i+1, i] equals the unrestarted beta
m_fac_H(i, i - 1) = restart ? Scalar(0) : m_beta;
// w <- A * v, v = m_fac_V.col(i)
m_op.perform_op(&m_fac_V(0, i), w.data());
op_counter++;
const Index i1 = i + 1;
// First i+1 columns of V
MapConstMat Vs(m_fac_V.data(), m_n, i1);
// h = m_fac_H(0:i, i)
MapVec h(&m_fac_H(0, i), i1);
// h <- V'Bw
m_op.trans_product(Vs, w, h);
// f <- w - V * h
m_fac_f.noalias() = w - Vs * h;
m_beta = m_op.norm(m_fac_f);
if (m_beta > Scalar(0.717) * m_op.norm(h))
continue;
// f/||f|| is going to be the next column of V, so we need to test
// whether V'B(f/||f||) ~= 0
m_op.trans_product(Vs, m_fac_f, Vf.head(i1));
Scalar ortho_err = Vf.head(i1).cwiseAbs().maxCoeff();
// If not, iteratively correct the residual
int count = 0;
while (count < 5 && ortho_err > m_eps * m_beta)
{
// There is an edge case: when beta=||f|| is close to zero, f mostly consists
// of noises of rounding errors, so the test [ortho_err < eps * beta] is very
// likely to fail. In particular, if beta=0, then the test is ensured to fail.
// Hence when this happens, we force f to be zero, and then restart in the
// next iteration.
if (m_beta < beta_thresh)
{
m_fac_f.setZero();
m_beta = Scalar(0);
break;
}
// f <- f - V * Vf
m_fac_f.noalias() -= Vs * Vf.head(i1);
// h <- h + Vf
h.noalias() += Vf.head(i1);
// beta <- ||f||
m_beta = m_op.norm(m_fac_f);
m_op.trans_product(Vs, m_fac_f, Vf.head(i1));
ortho_err = Vf.head(i1).cwiseAbs().maxCoeff();
count++;
}
}
// Indicate that this is a step-m factorization
m_k = to_m;
}
// Apply H -> Q'HQ, where Q is from a double shift QR decomposition
void compress_H(const DoubleShiftQR<Scalar>& decomp)
{
decomp.matrix_QtHQ(m_fac_H);
m_k -= 2;
}
// Apply H -> Q'HQ, where Q is from an upper Hessenberg QR decomposition
void compress_H(const UpperHessenbergQR<Scalar>& decomp)
{
decomp.matrix_QtHQ(m_fac_H);
m_k--;
}
// Apply V -> VQ and compute the new f.
// Should be called after compress_H(), since m_k is updated there.
// Only need to update the first k+1 columns of V
// The first (m - k + i) elements of the i-th column of Q are non-zero,
// and the rest are zero
void compress_V(const Matrix& Q)
{
Matrix Vs(m_n, m_k + 1);
for (Index i = 0; i < m_k; i++)
{
const Index nnz = m_m - m_k + i + 1;
MapConstVec q(&Q(0, i), nnz);
Vs.col(i).noalias() = m_fac_V.leftCols(nnz) * q;
}
Vs.col(m_k).noalias() = m_fac_V * Q.col(m_k);
m_fac_V.leftCols(m_k + 1).noalias() = Vs;
Vector fk = m_fac_f * Q(m_m - 1, m_k - 1) + m_fac_V.col(m_k) * m_fac_H(m_k, m_k - 1);
m_fac_f.swap(fk);
m_beta = m_op.norm(m_fac_f);
}
};
} // namespace Spectra
#endif // ARNOLDI_H
| {
"pile_set_name": "Github"
} |
--- Lua-side storage for safe references to C++ objects.
--
-- Whenever a new character or item is initialized, a corresponding
-- handle will be created here to track the object's lifetime in an
-- isolated Lua environment managed by a C++ handle manager. If the
-- object is no longer valid for use (a character died, or an item was
-- destroyed), the C++ side will set the Lua side's handle to be
-- invalid. An error will be thrown on trying to access or write to
-- anything on an invalid handle. Since objects are identified by
-- UUIDs, it is possible to serialize references to C++ objects
-- relatively easily, allowing for serializing the state of any mods
-- that are in use along with the base save data. The usage of UUIDs
-- also allows checking equality and validity of objects, even long
-- after the C++ object the handle references has been removed.
--
-- Borrowed from https://eliasdaler.github.io/game-object-references/
local Handle = {}
-- Stores a map of handle.__uuid -> C++ object reference. These should not be
-- directly accessable outside this chunk.
-- Indexed by [class_name][uuid].
local refs = {}
-- Stores a map of raw pointer -> handle.
-- Indexed by [class_name][id].
local handles_by_pointer = {}
-- Cache for function closures resolved when indexing a handle.
-- Creating a new closure for every method lookup is expensive.
-- Indexed by [class_name][method_name].
local memoized_funcs = {}
local function handle_error(handle, key)
if _IS_TEST then
return
end
if ELONA and ELONA.require then
local GUI = ELONA.require("core.GUI")
GUI.txt_color(3)
GUI.txt("Error: handle is not valid! ")
if key ~= nil then
GUI.txt("Indexing: " .. tostring(key) .. " ")
end
GUI.txt("This means the character/item got removed. ")
GUI.txt_color(0)
end
if handle then
print("Error: handle is not valid! " .. handle.__kind .. ":" .. handle.__uuid)
else
print("Error: handle is not valid! " .. tostring(handle))
end
print(debug.traceback())
error("Error: handle is not valid!", 2)
end
function Handle.is_valid(handle)
return handle ~= nil and refs[handle.__kind][handle.__uuid] ~= nil
end
--- Create a metatable to be set on all handle tables of a given kind
--- ("LuaCharacter", "LuaItem") which will check for validity on
--- variable/method access.
local function generate_metatable(kind)
-- The userdata table is bound by sol2 as a global.
local userdata_table = _ENV[kind]
local mt = {}
memoized_funcs[kind] = {}
mt.__index = function(handle, key)
if key == "is_valid" then
-- workaround to avoid serializing is_valid function, since
-- serpent will refuse to load it safely
return Handle.is_valid
end
if not Handle.is_valid(handle) then
handle_error(handle, key)
end
-- Try to get a property out of the C++ reference.
local val = refs[kind][handle.__uuid][key]
if val ~= nil then
-- If the found property is a plain value, return it.
if type(val) ~= "function" then
return val
end
end
-- If that fails, try calling a function by the name given.
local f = memoized_funcs[kind][key]
if not f then
-- Look up the function on the usertype table generated by
-- sol2.
f = function(h, ...)
if type(h) ~= "table" or not h.__handle then
error("Please call this function using colon syntax (':').")
end
return userdata_table[key](refs[kind][h.__uuid], ...)
end
-- Cache it so we don't incur the overhead of creating a
-- closure on every lookup.
memoized_funcs[kind][key] = f
end
return f
end
mt.__newindex = function(handle, key, value)
if not Handle.is_valid(handle) then
handle_error(handle, key)
end
refs[kind][handle.__uuid][key] = value
end
mt.__eq = function(lhs, rhs)
return lhs.__kind == rhs.__kind and lhs.__uuid == rhs.__uuid
end
mt.__tostring = function(handle)
local ref = refs[handle.__kind][handle.__uuid]
if ref then
return userdata_table.__tostring(ref)
else
return "nil"
end
end
-- for serpent
mt.__serialize = function(handle)
return handle
end
refs[kind] = {}
handles_by_pointer[kind] = {}
return mt
end
local metatables = {}
metatables.LuaCharacter = generate_metatable("LuaCharacter")
metatables.LuaItem = generate_metatable("LuaItem")
--- Given a valid handle and kind, retrieves the underlying C++
--- userdata reference.
function Handle.get_ref(handle, kind)
if not Handle.is_valid(handle) then
handle_error(handle)
return nil
end
if handle.__kind ~= kind then
print(debug.traceback())
error("Error: handle is of wrong type: wanted " .. kind .. ", got " .. handle.__kind)
return nil
end
return refs[kind][handle.__uuid]
end
function Handle.set_ref(handle, ref)
refs[handle.__kind][handle.__uuid] = ref
end
--- Gets a metatable for the lua type specified ("LuaItem",
--- "LuaCharacter")
function Handle.get_metatable(kind)
return metatables[kind]
end
--- Given a raw pointer of a C++ object and kind, retrieves the
--- handle that references it.
function Handle.get_handle(pointer, kind)
local handle = handles_by_pointer[kind][pointer]
if handle and handle.__kind ~= kind then
print(debug.traceback())
error("Error: handle is of wrong type: wanted " .. kind .. ", got " .. handle.__kind)
return nil
end
return handle
end
--- Creates a new handle by using a C++ raw pointer. The handle's pointer must
--- not be occupied by another handle, to prevent overwrites.
function Handle.create_handle(cpp_ref, pointer, kind, uuid)
if handles_by_pointer[kind][pointer] ~= nil then
print(handles_by_pointer[kind][pointer].__uuid)
error("Handle already exists: " .. kind .. ":" .. pointer, 2)
return nil
end
-- print("CREATE " .. kind .. " " .. pointer .. " " .. uuid)
local handle = {
__uuid = uuid,
__kind = kind,
__handle = true
}
setmetatable(handle, metatables[handle.__kind])
refs[handle.__kind][handle.__uuid] = cpp_ref
handles_by_pointer[handle.__kind][pointer] = handle
return handle
end
--- Removes an existing handle by using a C++ raw pointer.
--- It is acceptable if the handle doesn't already exist.
function Handle.remove_handle(cpp_ref, pointer, kind)
local handle = handles_by_pointer[kind][pointer]
if handle == nil then
return
end
-- print("REMOVE " .. pointer .. " " .. handle.__uuid)
assert(handle.__kind == kind)
refs[handle.__kind][handle.__uuid] = nil
handles_by_pointer[handle.__kind][pointer] = nil
end
--- Moves a handle from one raw pointer to another if it exists. If the handle
--- exists, the target slot must not be occupied. If not, the destination slot
--- will be set to empty as well.
function Handle.relocate_handle(cpp_ref, pointer, dest_cpp_ref, new_pointer, kind)
local handle = handles_by_pointer[kind][pointer]
if Handle.is_valid(handle) then
handles_by_pointer[kind][new_pointer] = handle
Handle.set_ref(handle, dest_cpp_ref)
else
-- When the handle is not valid, set the destination slot to be
-- invalid as well, to reflect relocating an empty handle.
-- This can happen when a temporary character is created as part
-- of change creature magic, as the temporary's state will be
-- empty at the time of relocation.
handles_by_pointer[kind][pointer] = nil
end
-- Clear the slot the handle was moved from.
handles_by_pointer[kind][pointer] = nil
end
--- Exchanges the positions of two handles and updates their __index
--- fields with the new values. Both handles must be valid.
function Handle.swap_handles(cpp_ref_a, cpp_ref_b, kind)
-- Do nothing.
end
-- Functions for deserialization. The steps are as follows.
-- 1. Deserialize mod data that contains the table of handles.
-- 2. Load handles onto the "handles_by_pointer" table using
-- "merge_handles".
-- 3. In C++, for each object loaded, add its reference to the "refs"
-- table using by looking up a newly inserted handle using the C++
-- raw pointer in "handles_by_pointer".
-- See also HandleManager::resolve_handle in C++.
function Handle.merge_handles(kind, obj_ids)
for pointer, obj_id in pairs(obj_ids) do
if obj_id ~= nil then
local handle = {
__uuid = obj_id,
__kind = kind,
__handle = true,
}
setmetatable(handle, metatables[kind])
handles_by_pointer[kind][pointer] = handle
end
end
end
function Handle.clear()
for kind, _ in pairs(refs) do
refs[kind] = {}
end
for kind, _ in pairs(handles_by_pointer) do
handles_by_pointer[kind] = {}
end
end
return Handle
| {
"pile_set_name": "Github"
} |
{
"id": "kk-jazz",
"name": "K.K. Jazz",
"category": "Music",
"games": {
"nl": {
"orderable": true,
"interiorThemes": [
"Trendy"
],
"sellPrice": {
"currency": "bells",
"value": 800
},
"sources": [
"K.K. Slider",
"T.I.Y."
],
"buyPrices": [
{
"currency": "bells",
"value": 3200
}
]
},
"nh": {
"orderable": true,
"sellPrice": {
"currency": "bells",
"value": 800
},
"buyPrices": [
{
"currency": "bells",
"value": 3200
}
]
}
}
} | {
"pile_set_name": "Github"
} |
/*
* Elonics E4000 tuner driver
*
* (C) 2011-2012 by Harald Welte <laforge@gnumonks.org>
* (C) 2012 by Sylvain Munaut <tnt@246tNt.com>
* (C) 2012 by Hoernchen <la@tfc-server.de>
*
* All Rights Reserved
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <limits.h>
#include <stdint.h>
#include <errno.h>
#include <string.h>
#include <stdio.h>
#include <reg_field.h>
#include <tuner_e4k.h>
#include <rtlsdr_i2c.h>
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
/* If this is defined, the limits are somewhat relaxed compared to what the
* vendor claims is possible */
#define OUT_OF_SPEC
#define MHZ(x) ((x)*1000*1000)
#define KHZ(x) ((x)*1000)
uint32_t unsigned_delta(uint32_t a, uint32_t b)
{
if (a > b)
return a - b;
else
return b - a;
}
/* look-up table bit-width -> mask */
static const uint8_t width2mask[] = {
0, 1, 3, 7, 0xf, 0x1f, 0x3f, 0x7f, 0xff
};
/***********************************************************************
* Register Access */
/*! \brief Write a register of the tuner chip
* \param[in] e4k reference to the tuner
* \param[in] reg number of the register
* \param[in] val value to be written
* \returns 0 on success, negative in case of error
*/
static int e4k_reg_write(struct e4k_state *e4k, uint8_t reg, uint8_t val)
{
int r;
uint8_t data[2];
data[0] = reg;
data[1] = val;
r = rtlsdr_i2c_write_fn(e4k->rtl_dev, e4k->i2c_addr, data, 2);
return r == 2 ? 0 : -1;
}
/*! \brief Read a register of the tuner chip
* \param[in] e4k reference to the tuner
* \param[in] reg number of the register
* \returns positive 8bit register contents on success, negative in case of error
*/
static int e4k_reg_read(struct e4k_state *e4k, uint8_t reg)
{
uint8_t data = reg;
if (rtlsdr_i2c_write_fn(e4k->rtl_dev, e4k->i2c_addr, &data, 1) < 1)
return -1;
if (rtlsdr_i2c_read_fn(e4k->rtl_dev, e4k->i2c_addr, &data, 1) < 1)
return -1;
return data;
}
/*! \brief Set or clear some (masked) bits inside a register
* \param[in] e4k reference to the tuner
* \param[in] reg number of the register
* \param[in] mask bit-mask of the value
* \param[in] val data value to be written to register
* \returns 0 on success, negative in case of error
*/
static int e4k_reg_set_mask(struct e4k_state *e4k, uint8_t reg,
uint8_t mask, uint8_t val)
{
uint8_t tmp = e4k_reg_read(e4k, reg);
if ((tmp & mask) == val)
return 0;
return e4k_reg_write(e4k, reg, (tmp & ~mask) | (val & mask));
}
/*! \brief Write a given field inside a register
* \param[in] e4k reference to the tuner
* \param[in] field structure describing the field
* \param[in] val value to be written
* \returns 0 on success, negative in case of error
*/
static int e4k_field_write(struct e4k_state *e4k, const struct reg_field *field, uint8_t val)
{
int rc;
uint8_t mask;
rc = e4k_reg_read(e4k, field->reg);
if (rc < 0)
return rc;
mask = width2mask[field->width] << field->shift;
return e4k_reg_set_mask(e4k, field->reg, mask, val << field->shift);
}
/*! \brief Read a given field inside a register
* \param[in] e4k reference to the tuner
* \param[in] field structure describing the field
* \returns positive value of the field, negative in case of error
*/
static int e4k_field_read(struct e4k_state *e4k, const struct reg_field *field)
{
int rc;
rc = e4k_reg_read(e4k, field->reg);
if (rc < 0)
return rc;
rc = (rc >> field->shift) & width2mask[field->width];
return rc;
}
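The shift/width arithmetic used by the two field helpers can be sketched independently of the I2C layer (pure Python; register values in the assertions are arbitrary examples):

```python
def field_read(reg_val, shift, width):
    """Extract a bit-field of `width` bits at `shift` from a register value."""
    return (reg_val >> shift) & ((1 << width) - 1)

def field_write(reg_val, shift, width, val):
    """Return reg_val with the field replaced by val (read-modify-write)."""
    mask = ((1 << width) - 1) << shift
    return (reg_val & ~mask) | ((val << shift) & mask)
```

This mirrors `e4k_field_write`/`e4k_field_read`, where `width2mask[field->width]` plays the role of `(1 << width) - 1`.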
/***********************************************************************
* Filter Control */
static const uint32_t rf_filt_center_uhf[] = {
MHZ(360), MHZ(380), MHZ(405), MHZ(425),
MHZ(450), MHZ(475), MHZ(505), MHZ(540),
MHZ(575), MHZ(615), MHZ(670), MHZ(720),
MHZ(760), MHZ(840), MHZ(890), MHZ(970)
};
static const uint32_t rf_filt_center_l[] = {
MHZ(1300), MHZ(1320), MHZ(1360), MHZ(1410),
MHZ(1445), MHZ(1460), MHZ(1490), MHZ(1530),
MHZ(1560), MHZ(1590), MHZ(1640), MHZ(1660),
MHZ(1680), MHZ(1700), MHZ(1720), MHZ(1750)
};
static int closest_arr_idx(const uint32_t *arr, unsigned int arr_size, uint32_t freq)
{
unsigned int i, bi = 0;
uint32_t best_delta = 0xffffffff;
/* iterate over the array containing a list of the center
* frequencies, selecting the closest one */
for (i = 0; i < arr_size; i++) {
uint32_t delta = unsigned_delta(freq, arr[i]);
if (delta < best_delta) {
best_delta = delta;
bi = i;
}
}
return bi;
}
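`closest_arr_idx` is a plain nearest-neighbour scan over the filter center frequencies; an equivalent Python one-liner (first index wins on ties, matching the C loop's strict `<` comparison):

```python
def closest_arr_idx(arr, freq):
    """Index of the array entry closest to freq (first match on ties)."""
    return min(range(len(arr)), key=lambda i: abs(arr[i] - freq))
```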
/* return 4-bit index as to which RF filter to select */
static int choose_rf_filter(enum e4k_band band, uint32_t freq)
{
int rc;
switch (band) {
case E4K_BAND_VHF2:
case E4K_BAND_VHF3:
rc = 0;
break;
case E4K_BAND_UHF:
rc = closest_arr_idx(rf_filt_center_uhf,
ARRAY_SIZE(rf_filt_center_uhf),
freq);
break;
case E4K_BAND_L:
rc = closest_arr_idx(rf_filt_center_l,
ARRAY_SIZE(rf_filt_center_l),
freq);
break;
default:
rc = -EINVAL;
break;
}
return rc;
}
/*! \brief Automatically select appropriate RF filter based on e4k state */
int e4k_rf_filter_set(struct e4k_state *e4k)
{
int rc;
rc = choose_rf_filter(e4k->band, e4k->vco.flo);
if (rc < 0)
return rc;
return e4k_reg_set_mask(e4k, E4K_REG_FILT1, 0xF, rc);
}
/* Mixer Filter */
static const uint32_t mix_filter_bw[] = {
KHZ(27000), KHZ(27000), KHZ(27000), KHZ(27000),
KHZ(27000), KHZ(27000), KHZ(27000), KHZ(27000),
KHZ(4600), KHZ(4200), KHZ(3800), KHZ(3400),
KHZ(3300), KHZ(2700), KHZ(2300), KHZ(1900)
};
/* IF RC Filter */
static const uint32_t ifrc_filter_bw[] = {
KHZ(21400), KHZ(21000), KHZ(17600), KHZ(14700),
KHZ(12400), KHZ(10600), KHZ(9000), KHZ(7700),
KHZ(6400), KHZ(5300), KHZ(4400), KHZ(3400),
KHZ(2600), KHZ(1800), KHZ(1200), KHZ(1000)
};
/* IF Channel Filter */
static const uint32_t ifch_filter_bw[] = {
KHZ(5500), KHZ(5300), KHZ(5000), KHZ(4800),
KHZ(4600), KHZ(4400), KHZ(4300), KHZ(4100),
KHZ(3900), KHZ(3800), KHZ(3700), KHZ(3600),
KHZ(3400), KHZ(3300), KHZ(3200), KHZ(3100),
KHZ(3000), KHZ(2950), KHZ(2900), KHZ(2800),
KHZ(2750), KHZ(2700), KHZ(2600), KHZ(2550),
KHZ(2500), KHZ(2450), KHZ(2400), KHZ(2300),
KHZ(2280), KHZ(2240), KHZ(2200), KHZ(2150)
};
static const uint32_t *if_filter_bw[] = {
mix_filter_bw,
ifch_filter_bw,
ifrc_filter_bw,
};
static const uint32_t if_filter_bw_len[] = {
ARRAY_SIZE(mix_filter_bw),
ARRAY_SIZE(ifch_filter_bw),
ARRAY_SIZE(ifrc_filter_bw),
};
static const struct reg_field if_filter_fields[] = {
{
E4K_REG_FILT2, 4, 4,
},
{
E4K_REG_FILT3, 0, 5,
},
{
E4K_REG_FILT2, 0, 4,
}
};
static int find_if_bw(enum e4k_if_filter filter, uint32_t bw)
{
if (filter >= ARRAY_SIZE(if_filter_bw))
return -EINVAL;
return closest_arr_idx(if_filter_bw[filter],
if_filter_bw_len[filter], bw);
}
/*! \brief Set the filter band-width of any of the IF filters
* \param[in] e4k reference to the tuner chip
* \param[in] filter filter to be configured
* \param[in] bandwidth bandwidth to be configured
* \returns 0 on success, negative in case of error
*/
int e4k_if_filter_bw_set(struct e4k_state *e4k, enum e4k_if_filter filter,
uint32_t bandwidth)
{
int bw_idx;
const struct reg_field *field;
if (filter >= ARRAY_SIZE(if_filter_bw))
return -EINVAL;
bw_idx = find_if_bw(filter, bandwidth);
field = &if_filter_fields[filter];
return e4k_field_write(e4k, field, bw_idx);
}
/*! \brief Enables / Disables the channel filter
* \param[in] e4k reference to the tuner chip
* \param[in] on 1=filter enabled, 0=filter disabled
* \returns 0 success, negative errors
*/
int e4k_if_filter_chan_enable(struct e4k_state *e4k, int on)
{
return e4k_reg_set_mask(e4k, E4K_REG_FILT3, E4K_FILT3_DISABLE,
on ? 0 : E4K_FILT3_DISABLE);
}
int e4k_if_filter_bw_get(struct e4k_state *e4k, enum e4k_if_filter filter)
{
const uint32_t *arr;
int rc;
const struct reg_field *field;
if (filter >= ARRAY_SIZE(if_filter_bw))
return -EINVAL;
field = &if_filter_fields[filter];
rc = e4k_field_read(e4k, field);
if (rc < 0)
return rc;
arr = if_filter_bw[filter];
return arr[rc];
}
/***********************************************************************
* Frequency Control */
#define E4K_FVCO_MIN_KHZ 2600000 /* 2.6 GHz */
#define E4K_FVCO_MAX_KHZ 3900000 /* 3.9 GHz */
#define E4K_PLL_Y 65536
#ifdef OUT_OF_SPEC
#define E4K_FLO_MIN_MHZ 50
#define E4K_FLO_MAX_MHZ 2200UL
#else
#define E4K_FLO_MIN_MHZ 64
#define E4K_FLO_MAX_MHZ 1700
#endif
struct pll_settings {
uint32_t freq;
uint8_t reg_synth7;
uint8_t mult;
};
static const struct pll_settings pll_vars[] = {
{KHZ(72400), (1 << 3) | 7, 48},
{KHZ(81200), (1 << 3) | 6, 40},
{KHZ(108300), (1 << 3) | 5, 32},
{KHZ(162500), (1 << 3) | 4, 24},
{KHZ(216600), (1 << 3) | 3, 16},
{KHZ(325000), (1 << 3) | 2, 12},
{KHZ(350000), (1 << 3) | 1, 8},
{KHZ(432000), (0 << 3) | 3, 8},
{KHZ(667000), (0 << 3) | 2, 6},
{KHZ(1200000), (0 << 3) | 1, 4}
};
static int is_fvco_valid(uint32_t fvco_z)
{
/* check if the resulting fvco is valid */
if (fvco_z/1000 < E4K_FVCO_MIN_KHZ ||
fvco_z/1000 > E4K_FVCO_MAX_KHZ) {
fprintf(stderr, "[E4K] Fvco %u invalid\n", fvco_z);
return 0;
}
return 1;
}
static int is_fosc_valid(uint32_t fosc)
{
if (fosc < MHZ(16) || fosc > MHZ(30)) {
fprintf(stderr, "[E4K] Fosc %u invalid\n", fosc);
return 0;
}
return 1;
}
static int is_z_valid(uint32_t z)
{
if (z > 255) {
fprintf(stderr, "[E4K] Z %u invalid\n", z);
return 0;
}
return 1;
}
/*! \brief Determine if 3-phase mixing shall be used or not */
static int use_3ph_mixing(uint32_t flo)
{
/* this is a magic number somewhere between VHF and UHF */
if (flo < MHZ(350))
return 1;
return 0;
}
/*! \brief compute Fvco based on Fosc, Z and X
* \returns positive value (Fvco in Hz), 0 in case of error */
static uint64_t compute_fvco(uint32_t f_osc, uint8_t z, uint16_t x)
{
uint64_t fvco_z, fvco_x, fvco;
/* We use the following transformation in order to
* handle the fractional part with integer arithmetic:
* Fvco = Fosc * (Z + X/Y) <=> Fvco = Fosc * Z + (Fosc * X)/Y
* This avoids X/Y = 0. However, then we would overflow a 32bit
* integer, as we cannot hold e.g. 26 MHz * 65536 either.
*/
fvco_z = (uint64_t)f_osc * z;
#if 0
if (!is_fvco_valid(fvco_z))
return 0;
#endif
fvco_x = ((uint64_t)f_osc * x) / E4K_PLL_Y;
fvco = fvco_z + fvco_x;
return fvco;
}
static uint32_t compute_flo(uint32_t f_osc, uint8_t z, uint16_t x, uint8_t r)
{
uint64_t fvco = compute_fvco(f_osc, z, x);
if (fvco == 0)
return -EINVAL;
return fvco / r;
}
static int e4k_band_set(struct e4k_state *e4k, enum e4k_band band)
{
int rc;
switch (band) {
case E4K_BAND_VHF2:
case E4K_BAND_VHF3:
case E4K_BAND_UHF:
e4k_reg_write(e4k, E4K_REG_BIAS, 3);
break;
case E4K_BAND_L:
e4k_reg_write(e4k, E4K_REG_BIAS, 0);
break;
}
/* workaround: if we don't reset this register before writing to it,
* we get a gap between 325-350 MHz */
rc = e4k_reg_set_mask(e4k, E4K_REG_SYNTH1, 0x06, 0);
rc = e4k_reg_set_mask(e4k, E4K_REG_SYNTH1, 0x06, band << 1);
if (rc >= 0)
e4k->band = band;
return rc;
}
/*! \brief Compute PLL parameters for given target frequency
* \param[out] oscp Oscillator parameters, if computation successful
* \param[in] fosc Clock input frequency applied to the chip (Hz)
* \param[in] intended_flo target tuning frequency (Hz)
* \returns actual PLL frequency, as close as possible to intended_flo,
* 0 in case of error
*/
uint32_t e4k_compute_pll_params(struct e4k_pll_params *oscp, uint32_t fosc, uint32_t intended_flo)
{
uint32_t i;
uint8_t r = 2;
uint64_t intended_fvco, remainder;
uint64_t z = 0;
uint32_t x;
int flo;
int three_phase_mixing = 0;
oscp->r_idx = 0;
if (!is_fosc_valid(fosc))
return 0;
for(i = 0; i < ARRAY_SIZE(pll_vars); ++i) {
if(intended_flo < pll_vars[i].freq) {
three_phase_mixing = (pll_vars[i].reg_synth7 & 0x08) ? 1 : 0;
oscp->r_idx = pll_vars[i].reg_synth7;
r = pll_vars[i].mult;
break;
}
}
//fprintf(stderr, "[E4K] Fint=%u, R=%u\n", intended_flo, r);
/* flo(max) = 1700MHz, R(max) = 48, we need 64bit! */
intended_fvco = (uint64_t)intended_flo * r;
/* compute integral component of multiplier */
z = intended_fvco / fosc;
/* compute fractional part. this will not overflow,
* as fosc(max) = 30MHz and z(max) = 255 */
remainder = intended_fvco - (fosc * z);
/* remainder(max) = 30MHz, E4K_PLL_Y = 65536 -> 64bit! */
x = (remainder * E4K_PLL_Y) / fosc;
/* x(max) as result of this computation is 65536 */
flo = compute_flo(fosc, z, x, r);
oscp->fosc = fosc;
oscp->flo = flo;
oscp->intended_flo = intended_flo;
oscp->r = r;
// oscp->r_idx = pll_vars[i].reg_synth7 & 0x0;
oscp->threephase = three_phase_mixing;
oscp->x = x;
oscp->z = z;
return flo;
}
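/* Worked example (illustrative numbers, not taken from a datasheet):
 * for fosc = 26 MHz and intended_flo = 100 MHz, the pll_vars table
 * selects the 108.3 MHz entry, i.e. R = 32 with 3-phase mixing.
 * intended_fvco = 100 MHz * 32 = 3.2 GHz (inside the 2.6-3.9 GHz range),
 * Z = 3200/26 = 123, remainder = 2 MHz, and
 * X = (2 MHz * 65536) / 26 MHz = 5041.
 * compute_flo() then yields 99999999 Hz, i.e. 1 Hz below the request,
 * the closest frequency this fractional-N computation can synthesize. */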
int e4k_tune_params(struct e4k_state *e4k, struct e4k_pll_params *p)
{
/* program R + 3phase/2phase */
e4k_reg_write(e4k, E4K_REG_SYNTH7, p->r_idx);
/* program Z */
e4k_reg_write(e4k, E4K_REG_SYNTH3, p->z);
/* program X */
e4k_reg_write(e4k, E4K_REG_SYNTH4, p->x & 0xff);
e4k_reg_write(e4k, E4K_REG_SYNTH5, p->x >> 8);
/* we're in auto calibration mode, so there's no need to trigger it */
memcpy(&e4k->vco, p, sizeof(e4k->vco));
/* set the band */
if (e4k->vco.flo < MHZ(140))
e4k_band_set(e4k, E4K_BAND_VHF2);
else if (e4k->vco.flo < MHZ(350))
e4k_band_set(e4k, E4K_BAND_VHF3);
else if (e4k->vco.flo < MHZ(1135))
e4k_band_set(e4k, E4K_BAND_UHF);
else
e4k_band_set(e4k, E4K_BAND_L);
/* select and set proper RF filter */
e4k_rf_filter_set(e4k);
return e4k->vco.flo;
}
/*! \brief High-level tuning API, just specify frequency
*
* This function will compute matching PLL parameters, program them into the
* hardware and set the band as well as RF filter.
*
* \param[in] e4k reference to tuner
* \param[in] freq frequency in Hz
* \returns actual tuned frequency, negative in case of error
*/
int e4k_tune_freq(struct e4k_state *e4k, uint32_t freq)
{
uint32_t rc;
struct e4k_pll_params p;
/* determine PLL parameters */
rc = e4k_compute_pll_params(&p, e4k->vco.fosc, freq);
if (!rc)
return -EINVAL;
/* actually tune to those parameters */
rc = e4k_tune_params(e4k, &p);
/* check PLL lock */
rc = e4k_reg_read(e4k, E4K_REG_SYNTH1);
if (!(rc & 0x01)) {
fprintf(stderr, "[E4K] PLL not locked for %u Hz!\n", freq);
return -1;
}
return 0;
}
/***********************************************************************
* Gain Control */
static const int8_t if_stage1_gain[] = {
-3, 6
};
static const int8_t if_stage23_gain[] = {
0, 3, 6, 9
};
static const int8_t if_stage4_gain[] = {
0, 1, 2, 2
};
static const int8_t if_stage56_gain[] = {
3, 6, 9, 12, 15, 15, 15, 15
};
static const int8_t *if_stage_gain[] = {
0,
if_stage1_gain,
if_stage23_gain,
if_stage23_gain,
if_stage4_gain,
if_stage56_gain,
if_stage56_gain
};
static const uint8_t if_stage_gain_len[] = {
0,
ARRAY_SIZE(if_stage1_gain),
ARRAY_SIZE(if_stage23_gain),
ARRAY_SIZE(if_stage23_gain),
ARRAY_SIZE(if_stage4_gain),
ARRAY_SIZE(if_stage56_gain),
ARRAY_SIZE(if_stage56_gain)
};
static const struct reg_field if_stage_gain_regs[] = {
{ 0, 0, 0 },
{ E4K_REG_GAIN3, 0, 1 },
{ E4K_REG_GAIN3, 1, 2 },
{ E4K_REG_GAIN3, 3, 2 },
{ E4K_REG_GAIN3, 5, 2 },
{ E4K_REG_GAIN4, 0, 3 },
{ E4K_REG_GAIN4, 3, 3 }
};
static const int32_t lnagain[] = {
-50, 0,
-25, 1,
0, 4,
25, 5,
50, 6,
75, 7,
100, 8,
125, 9,
150, 10,
175, 11,
200, 12,
250, 13,
300, 14,
};
static const int32_t enhgain[] = {
10, 30, 50, 70
};
int e4k_set_lna_gain(struct e4k_state *e4k, int32_t gain)
{
uint32_t i;
for(i = 0; i < ARRAY_SIZE(lnagain)/2; ++i) {
if(lnagain[i*2] == gain) {
e4k_reg_set_mask(e4k, E4K_REG_GAIN1, 0xf, lnagain[i*2+1]);
return gain;
}
}
return -EINVAL;
}
int e4k_set_enh_gain(struct e4k_state *e4k, int32_t gain)
{
uint32_t i;
for(i = 0; i < ARRAY_SIZE(enhgain); ++i) {
if(enhgain[i] == gain) {
e4k_reg_set_mask(e4k, E4K_REG_AGC11, 0x7, E4K_AGC11_LNA_GAIN_ENH | (i << 1));
return gain;
}
}
e4k_reg_set_mask(e4k, E4K_REG_AGC11, 0x7, 0);
/* special case: 0 = off */
if(0 == gain)
return 0;
else
return -EINVAL;
}
int e4k_enable_manual_gain(struct e4k_state *e4k, uint8_t manual)
{
if (manual) {
/* Set LNA mode to manual */
e4k_reg_set_mask(e4k, E4K_REG_AGC1, E4K_AGC1_MOD_MASK, E4K_AGC_MOD_SERIAL);
/* Set Mixer Gain Control to manual */
e4k_reg_set_mask(e4k, E4K_REG_AGC7, E4K_AGC7_MIX_GAIN_AUTO, 0);
} else {
/* Set LNA mode to auto */
e4k_reg_set_mask(e4k, E4K_REG_AGC1, E4K_AGC1_MOD_MASK, E4K_AGC_MOD_IF_SERIAL_LNA_AUTON);
/* Set Mixer Gain Control to auto */
e4k_reg_set_mask(e4k, E4K_REG_AGC7, E4K_AGC7_MIX_GAIN_AUTO, 1);
e4k_reg_set_mask(e4k, E4K_REG_AGC11, 0x7, 0);
}
return 0;
}
static int find_stage_gain(uint8_t stage, int8_t val)
{
const int8_t *arr;
int i;
if (stage >= ARRAY_SIZE(if_stage_gain))
return -EINVAL;
arr = if_stage_gain[stage];
for (i = 0; i < if_stage_gain_len[stage]; i++) {
if (arr[i] == val)
return i;
}
return -EINVAL;
}
/*! \brief Set the gain of one of the IF gain stages
 * \param[in] e4k handle to the tuner chip
 * \param[in] stage number of the stage (1..6)
 * \param[in] value gain value in dB
 * \returns 0 on success, negative in case of error
 */
int e4k_if_gain_set(struct e4k_state *e4k, uint8_t stage, int8_t value)
{
int rc;
uint8_t mask;
const struct reg_field *field;
rc = find_stage_gain(stage, value);
if (rc < 0)
return rc;
/* compute the bit-mask for the given gain field */
field = &if_stage_gain_regs[stage];
mask = width2mask[field->width] << field->shift;
return e4k_reg_set_mask(e4k, field->reg, mask, rc << field->shift);
}
int e4k_mixer_gain_set(struct e4k_state *e4k, int8_t value)
{
uint8_t bit;
switch (value) {
case 4:
bit = 0;
break;
case 12:
bit = 1;
break;
default:
return -EINVAL;
}
return e4k_reg_set_mask(e4k, E4K_REG_GAIN2, 1, bit);
}
int e4k_commonmode_set(struct e4k_state *e4k, int8_t value)
{
if(value < 0)
return -EINVAL;
else if(value > 7)
return -EINVAL;
return e4k_reg_set_mask(e4k, E4K_REG_DC7, 7, value);
}
/***********************************************************************
* DC Offset */
int e4k_manual_dc_offset(struct e4k_state *e4k, int8_t iofs, int8_t irange, int8_t qofs, int8_t qrange)
{
int res;
if((iofs < 0x00) || (iofs > 0x3f))
return -EINVAL;
if((irange < 0x00) || (irange > 0x03))
return -EINVAL;
if((qofs < 0x00) || (qofs > 0x3f))
return -EINVAL;
if((qrange < 0x00) || (qrange > 0x03))
return -EINVAL;
res = e4k_reg_set_mask(e4k, E4K_REG_DC2, 0x3f, iofs);
if(res < 0)
return res;
res = e4k_reg_set_mask(e4k, E4K_REG_DC3, 0x3f, qofs);
if(res < 0)
return res;
res = e4k_reg_set_mask(e4k, E4K_REG_DC4, 0x33, (qrange << 4) | irange);
return res;
}
/*! \brief Perform a DC offset calibration right now
 * \param[in] e4k handle to the tuner chip
 */
int e4k_dc_offset_calibrate(struct e4k_state *e4k)
{
/* make sure the DC range detector is enabled */
e4k_reg_set_mask(e4k, E4K_REG_DC5, E4K_DC5_RANGE_DET_EN, E4K_DC5_RANGE_DET_EN);
return e4k_reg_write(e4k, E4K_REG_DC1, 0x01);
}
static const int8_t if_gains_max[] = {
0, 6, 9, 9, 2, 15, 15
};
struct gain_comb {
int8_t mixer_gain;
int8_t if1_gain;
uint8_t reg;
};
static const struct gain_comb dc_gain_comb[] = {
{ 4, -3, 0x50 },
{ 4, 6, 0x51 },
{ 12, -3, 0x52 },
{ 12, 6, 0x53 },
};
#define TO_LUT(offset, range) ((offset) | ((range) << 6))
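/* Example: TO_LUT(0x15, 2) = 0x15 | (2 << 6) = 0x95, i.e. the 6-bit
 * offset lands in bits [5:0] and the 2-bit range in bits [7:6] of the
 * LUT byte written below. */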
int e4k_dc_offset_gen_table(struct e4k_state *e4k)
{
uint32_t i;
/* FIXME: read out current gain values and write them back
 * before returning to the caller */
/* disable auto mixer gain */
e4k_reg_set_mask(e4k, E4K_REG_AGC7, E4K_AGC7_MIX_GAIN_AUTO, 0);
/* set LNA/IF gain to full manual */
e4k_reg_set_mask(e4k, E4K_REG_AGC1, E4K_AGC1_MOD_MASK,
E4K_AGC_MOD_SERIAL);
/* set all 'other' gains to maximum */
for (i = 2; i <= 6; i++)
e4k_if_gain_set(e4k, i, if_gains_max[i]);
/* iterate over all mixer + if_stage_1 gain combinations */
for (i = 0; i < ARRAY_SIZE(dc_gain_comb); i++) {
uint8_t offs_i, offs_q, range, range_i, range_q;
/* set the combination of mixer / if1 gain */
e4k_mixer_gain_set(e4k, dc_gain_comb[i].mixer_gain);
e4k_if_gain_set(e4k, 1, dc_gain_comb[i].if1_gain);
/* perform actual calibration */
e4k_dc_offset_calibrate(e4k);
/* extract I/Q offset and range values */
offs_i = e4k_reg_read(e4k, E4K_REG_DC2) & 0x3f;
offs_q = e4k_reg_read(e4k, E4K_REG_DC3) & 0x3f;
range = e4k_reg_read(e4k, E4K_REG_DC4);
range_i = range & 0x3;
range_q = (range >> 4) & 0x3;
fprintf(stderr, "[E4K] Table %u I=%u/%u, Q=%u/%u\n",
i, range_i, offs_i, range_q, offs_q);
/* write into the table */
e4k_reg_write(e4k, dc_gain_comb[i].reg,
TO_LUT(offs_q, range_q));
e4k_reg_write(e4k, dc_gain_comb[i].reg + 0x10,
TO_LUT(offs_i, range_i));
}
return 0;
}
/***********************************************************************
* Standby */
/*! \brief Enable/disable standby mode
*/
int e4k_standby(struct e4k_state *e4k, int enable)
{
e4k_reg_set_mask(e4k, E4K_REG_MASTER1, E4K_MASTER1_NORM_STBY,
enable ? 0 : E4K_MASTER1_NORM_STBY);
return 0;
}
/***********************************************************************
* Initialization */
static int magic_init(struct e4k_state *e4k)
{
e4k_reg_write(e4k, 0x7e, 0x01);
e4k_reg_write(e4k, 0x7f, 0xfe);
e4k_reg_write(e4k, 0x82, 0x00);
e4k_reg_write(e4k, 0x86, 0x50); /* polarity A */
e4k_reg_write(e4k, 0x87, 0x20);
e4k_reg_write(e4k, 0x88, 0x01);
e4k_reg_write(e4k, 0x9f, 0x7f);
e4k_reg_write(e4k, 0xa0, 0x07);
return 0;
}
/*! \brief Initialize the E4K tuner
*/
int e4k_init(struct e4k_state *e4k)
{
/* make a dummy i2c read or write command, will not be ACKed! */
e4k_reg_read(e4k, 0);
/* Make sure we reset everything and clear POR indicator */
e4k_reg_write(e4k, E4K_REG_MASTER1,
E4K_MASTER1_RESET |
E4K_MASTER1_NORM_STBY |
E4K_MASTER1_POR_DET
);
/* Configure clock input */
e4k_reg_write(e4k, E4K_REG_CLK_INP, 0x00);
/* Disable clock output */
e4k_reg_write(e4k, E4K_REG_REF_CLK, 0x00);
e4k_reg_write(e4k, E4K_REG_CLKOUT_PWDN, 0x96);
/* Write some magic values into registers */
magic_init(e4k);
#if 0
/* Set common mode voltage a bit higher for more margin 850 mv */
e4k_commonmode_set(e4k, 4);
/* Initialize DC offset lookup tables */
e4k_dc_offset_gen_table(e4k);
/* Enable time variant DC correction */
e4k_reg_write(e4k, E4K_REG_DCTIME1, 0x01);
e4k_reg_write(e4k, E4K_REG_DCTIME2, 0x01);
#endif
/* Set LNA mode to manual */
e4k_reg_write(e4k, E4K_REG_AGC4, 0x10); /* High threshold */
e4k_reg_write(e4k, E4K_REG_AGC5, 0x04); /* Low threshold */
e4k_reg_write(e4k, E4K_REG_AGC6, 0x1a); /* LNA calib + loop rate */
e4k_reg_set_mask(e4k, E4K_REG_AGC1, E4K_AGC1_MOD_MASK,
E4K_AGC_MOD_SERIAL);
/* Set Mixer Gain Control to manual */
e4k_reg_set_mask(e4k, E4K_REG_AGC7, E4K_AGC7_MIX_GAIN_AUTO, 0);
#if 0
/* Enable LNA Gain enhancement */
e4k_reg_set_mask(e4k, E4K_REG_AGC11, 0x7,
E4K_AGC11_LNA_GAIN_ENH | (2 << 1));
/* Enable automatic IF gain mode switching */
e4k_reg_set_mask(e4k, E4K_REG_AGC8, 0x1, E4K_AGC8_SENS_LIN_AUTO);
#endif
/* Use auto-gain as default */
e4k_enable_manual_gain(e4k, 0);
/* Select moderate gain levels */
e4k_if_gain_set(e4k, 1, 6);
e4k_if_gain_set(e4k, 2, 0);
e4k_if_gain_set(e4k, 3, 0);
e4k_if_gain_set(e4k, 4, 0);
e4k_if_gain_set(e4k, 5, 9);
e4k_if_gain_set(e4k, 6, 9);
/* Set the most narrow filter we can possibly use */
e4k_if_filter_bw_set(e4k, E4K_IF_FILTER_MIX, KHZ(1900));
e4k_if_filter_bw_set(e4k, E4K_IF_FILTER_RC, KHZ(1000));
e4k_if_filter_bw_set(e4k, E4K_IF_FILTER_CHAN, KHZ(2150));
e4k_if_filter_chan_enable(e4k, 1);
/* Disable time variant DC correction and LUT */
e4k_reg_set_mask(e4k, E4K_REG_DC5, 0x03, 0);
e4k_reg_set_mask(e4k, E4K_REG_DCTIME1, 0x03, 0);
e4k_reg_set_mask(e4k, E4K_REG_DCTIME2, 0x03, 0);
return 0;
}
/*
* Copyright 2016 by Eugene Yokota
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package gigahorse
import java.io.{ File, FileOutputStream }
import scala.concurrent.Future
import scala.util.Try
object DownloadHandler {
/** Function from `StreamResponse` to `Future[File]` */
def asFile(file: File): StreamResponse => Future[File] = (response: StreamResponse) =>
{
val stream = response.byteBuffers
val out = new FileOutputStream(file).getChannel
stream.foldResource(file)((acc, bb) => {
out.write(bb)
acc
}, () => Try(out.close()))
}
}
package quantile
import (
"math"
"math/rand"
"sort"
"testing"
)
var (
Targets = map[float64]float64{
0.01: 0.001,
0.10: 0.01,
0.50: 0.05,
0.90: 0.01,
0.99: 0.001,
}
TargetsSmallEpsilon = map[float64]float64{
0.01: 0.0001,
0.10: 0.001,
0.50: 0.005,
0.90: 0.001,
0.99: 0.0001,
}
LowQuantiles = []float64{0.01, 0.1, 0.5}
HighQuantiles = []float64{0.99, 0.9, 0.5}
)
const RelativeEpsilon = 0.01
func verifyPercsWithAbsoluteEpsilon(t *testing.T, a []float64, s *Stream) {
sort.Float64s(a)
for quantile, epsilon := range Targets {
n := float64(len(a))
k := int(quantile * n)
if k < 1 {
k = 1
}
lower := int((quantile - epsilon) * n)
if lower < 1 {
lower = 1
}
upper := int(math.Ceil((quantile + epsilon) * n))
if upper > len(a) {
upper = len(a)
}
w, min, max := a[k-1], a[lower-1], a[upper-1]
if g := s.Query(quantile); g < min || g > max {
t.Errorf("q=%f: want %v [%f,%f], got %v", quantile, w, min, max, g)
}
}
}
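// Example: with n = 100000 samples and the Targets entry 0.50: 0.05,
// verifyPercsWithAbsoluteEpsilon accepts any value between the sorted
// samples at ranks 45000 and 55000 (lower = 0.45*n, upper = ceil(0.55*n)).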
func verifyLowPercsWithRelativeEpsilon(t *testing.T, a []float64, s *Stream) {
sort.Float64s(a)
for _, qu := range LowQuantiles {
n := float64(len(a))
k := int(qu * n)
lowerRank := int((1 - RelativeEpsilon) * qu * n)
upperRank := int(math.Ceil((1 + RelativeEpsilon) * qu * n))
w, min, max := a[k-1], a[lowerRank-1], a[upperRank-1]
if g := s.Query(qu); g < min || g > max {
t.Errorf("q=%f: want %v [%f,%f], got %v", qu, w, min, max, g)
}
}
}
func verifyHighPercsWithRelativeEpsilon(t *testing.T, a []float64, s *Stream) {
sort.Float64s(a)
for _, qu := range HighQuantiles {
n := float64(len(a))
k := int(qu * n)
lowerRank := int((1 - (1+RelativeEpsilon)*(1-qu)) * n)
upperRank := int(math.Ceil((1 - (1-RelativeEpsilon)*(1-qu)) * n))
w, min, max := a[k-1], a[lowerRank-1], a[upperRank-1]
if g := s.Query(qu); g < min || g > max {
t.Errorf("q=%f: want %v [%f,%f], got %v", qu, w, min, max, g)
}
}
}
func populateStream(s *Stream) []float64 {
a := make([]float64, 0, 1e5+100)
for i := 0; i < cap(a); i++ {
v := rand.NormFloat64()
// Add 5% asymmetric outliers.
if i%20 == 0 {
v = v*v + 1
}
s.Insert(v)
a = append(a, v)
}
return a
}
func TestTargetedQuery(t *testing.T) {
rand.Seed(42)
s := NewTargeted(Targets)
a := populateStream(s)
verifyPercsWithAbsoluteEpsilon(t, a, s)
}
func TestTargetedQuerySmallSampleSize(t *testing.T) {
rand.Seed(42)
s := NewTargeted(TargetsSmallEpsilon)
a := []float64{1, 2, 3, 4, 5}
for _, v := range a {
s.Insert(v)
}
verifyPercsWithAbsoluteEpsilon(t, a, s)
// If not yet flushed, results should be precise:
if !s.flushed() {
for φ, want := range map[float64]float64{
0.01: 1,
0.10: 1,
0.50: 3,
0.90: 5,
0.99: 5,
} {
if got := s.Query(φ); got != want {
t.Errorf("want %f for φ=%f, got %f", want, φ, got)
}
}
}
}
func TestLowBiasedQuery(t *testing.T) {
rand.Seed(42)
s := NewLowBiased(RelativeEpsilon)
a := populateStream(s)
verifyLowPercsWithRelativeEpsilon(t, a, s)
}
func TestHighBiasedQuery(t *testing.T) {
rand.Seed(42)
s := NewHighBiased(RelativeEpsilon)
a := populateStream(s)
verifyHighPercsWithRelativeEpsilon(t, a, s)
}
// BrokenTestTargetedMerge is broken, see Merge doc comment.
func BrokenTestTargetedMerge(t *testing.T) {
rand.Seed(42)
s1 := NewTargeted(Targets)
s2 := NewTargeted(Targets)
a := populateStream(s1)
a = append(a, populateStream(s2)...)
s1.Merge(s2.Samples())
verifyPercsWithAbsoluteEpsilon(t, a, s1)
}
// BrokenTestLowBiasedMerge is broken, see Merge doc comment.
func BrokenTestLowBiasedMerge(t *testing.T) {
rand.Seed(42)
s1 := NewLowBiased(RelativeEpsilon)
s2 := NewLowBiased(RelativeEpsilon)
a := populateStream(s1)
a = append(a, populateStream(s2)...)
s1.Merge(s2.Samples())
verifyLowPercsWithRelativeEpsilon(t, a, s2)
}
// BrokenTestHighBiasedMerge is broken, see Merge doc comment.
func BrokenTestHighBiasedMerge(t *testing.T) {
rand.Seed(42)
s1 := NewHighBiased(RelativeEpsilon)
s2 := NewHighBiased(RelativeEpsilon)
a := populateStream(s1)
a = append(a, populateStream(s2)...)
s1.Merge(s2.Samples())
verifyHighPercsWithRelativeEpsilon(t, a, s2)
}
func TestUncompressed(t *testing.T) {
q := NewTargeted(Targets)
for i := 100; i > 0; i-- {
q.Insert(float64(i))
}
if g := q.Count(); g != 100 {
t.Errorf("want count 100, got %d", g)
}
// Before compression, Query should have 100% accuracy.
for quantile := range Targets {
w := quantile * 100
if g := q.Query(quantile); g != w {
t.Errorf("want %f, got %f", w, g)
}
}
}
func TestUncompressedSamples(t *testing.T) {
q := NewTargeted(map[float64]float64{0.99: 0.001})
for i := 1; i <= 100; i++ {
q.Insert(float64(i))
}
if g := q.Samples().Len(); g != 100 {
t.Errorf("want count 100, got %d", g)
}
}
func TestUncompressedOne(t *testing.T) {
q := NewTargeted(map[float64]float64{0.99: 0.01})
q.Insert(3.14)
if g := q.Query(0.90); g != 3.14 {
t.Error("want PI, got", g)
}
}
func TestDefaults(t *testing.T) {
if g := NewTargeted(map[float64]float64{0.99: 0.001}).Query(0.99); g != 0 {
t.Errorf("want 0, got %f", g)
}
}
# CommNet model for bAbI tasks
This code is for training the CommNet model on the toy Q&A dataset [bAbI](http://fb.ai/babi), where one has to answer a simple question after reading a short story. The model solves this problem by assigning each sentence of the story to a separate agent and letting the agents communicate. After several steps of communication, the agents produce a single answer. For more details, see our [paper](https://arxiv.org/abs/1605.07736).
## Usage
The code is written in Matlab. After downloading the code, go to the code directory and type `run` in Matlab. This will start training on the first task. To train on a different task, change the `task` variable in the file `run.m`. The data directory contains an older 10k version of the tasks, but the latest version can be downloaded from [here](http://fb.ai/babi).
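A minimal sketch of the task selection described above (the `task` variable and file names come from the text; the actual contents of `run.m` may differ):

```matlab
% run.m (sketch): pick which bAbI task to train on (1..20)
task = 2;   % e.g. task 2: "2 supporting facts"
% training then starts with the settings from config_babi.m
```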
You can change model settings in `config_babi.m`. With the default configuration, we obtained the following result, which is included in the paper.
Task | Test error (%)
-----|---------:
1: 1 supporting fact | 0.00
2: 2 supporting facts | 3.23
3: 3 supporting facts | 68.35
4: 2 argument relations | 0.00
5: 3 argument relations | 1.71
6: yes/no questions | 0.00
7: counting | 0.60
8: lists/sets | 0.50
9: simple negation | 0.00
10: indefinite knowledge | 0.00
11: basic coherence | 0.00
12: conjunction | 0.00
13: compound coherence | 0.00
14: time reasoning | 0.00
15: basic deduction | 0.00
16: basic induction | 51.31
17: positional reasoning | 15.12
18: size reasoning | 1.41
19: path finding | 0.00
20: agent's motivation | 0.00
Mean | 7.11
failed tasks | 3
/*
* UTF-8 (with BOM) English-EN text strings for login.sh html elements
*/
logS.LSect="Login";
logS.EAdmP="Enter Admin Password";
logS.YQot="Your Quota";
logS.NQot="Entire Network Quota";
logS.CTime="Current Date & Time";
logS.CIP="IP Address";
logS.CIPs="You are currently connected from:";
//javascript
logS.passErr="ERROR: You must enter a password";
logS.Lging="Logging In";
logS.SExp="Session Expired";
logS.InvP="Invalid Password";
logS.LOut="Logged Out";
logS.Qnam=["total up+down", "download", "upload" ];
logS.of="of";
logS.fQuo="for Quota";
logS.husd="has been used";
logS.qusd="quota has been used";
/*
* Hibernate, Relational Persistence for Idiomatic Java
*
* License: GNU Lesser General Public License (LGPL), version 2.1 or later.
* See the lgpl.txt file in the root directory or <http://www.gnu.org/licenses/lgpl-2.1.html>.
*/
package org.hibernate.jpa.test.inheritance;
import javax.persistence.EntityManager;
import org.junit.Test;
import org.hibernate.jpa.test.BaseEntityManagerFunctionalTestCase;
import static org.junit.Assert.assertNotNull;
/**
* @author Emmanuel Bernard
*/
public class InheritanceTest extends BaseEntityManagerFunctionalTestCase {
@Test
public void testFind() throws Exception {
EntityManager firstSession = getOrCreateEntityManager( );
Strawberry u = new Strawberry();
u.setSize( 12L );
firstSession.getTransaction().begin();
firstSession.persist(u);
firstSession.getTransaction().commit();
Long newId = u.getId();
firstSession.clear();
firstSession.getTransaction().begin();
// 1.
Strawberry result1 = firstSession.find(Strawberry.class, newId);
assertNotNull( result1 );
// 2.
Strawberry result2 = (Strawberry) firstSession.find(Fruit.class, newId);
System.out.println("2. result is:" + result2);
firstSession.getTransaction().commit();
firstSession.close();
}
@Override
public Class[] getAnnotatedClasses() {
return new Class[] {
Fruit.class,
Strawberry.class
};
}
}
module.exports =
/******/ (function(modules) { // webpackBootstrap
/******/ // The module cache
/******/ var installedModules = {};
/******/
/******/ // The require function
/******/ function __webpack_require__(moduleId) {
/******/
/******/ // Check if module is in cache
/******/ if(installedModules[moduleId]) {
/******/ return installedModules[moduleId].exports;
/******/ }
/******/ // Create a new module (and put it into the cache)
/******/ var module = installedModules[moduleId] = {
/******/ i: moduleId,
/******/ l: false,
/******/ exports: {}
/******/ };
/******/
/******/ // Execute the module function
/******/ modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
/******/
/******/ // Flag the module as loaded
/******/ module.l = true;
/******/
/******/ // Return the exports of the module
/******/ return module.exports;
/******/ }
/******/
/******/
/******/ // expose the modules object (__webpack_modules__)
/******/ __webpack_require__.m = modules;
/******/
/******/ // expose the module cache
/******/ __webpack_require__.c = installedModules;
/******/
/******/ // define getter function for harmony exports
/******/ __webpack_require__.d = function(exports, name, getter) {
/******/ if(!__webpack_require__.o(exports, name)) {
/******/ Object.defineProperty(exports, name, {
/******/ configurable: false,
/******/ enumerable: true,
/******/ get: getter
/******/ });
/******/ }
/******/ };
/******/
/******/ // getDefaultExport function for compatibility with non-harmony modules
/******/ __webpack_require__.n = function(module) {
/******/ var getter = module && module.__esModule ?
/******/ function getDefault() { return module['default']; } :
/******/ function getModuleExports() { return module; };
/******/ __webpack_require__.d(getter, 'a', getter);
/******/ return getter;
/******/ };
/******/
/******/ // Object.prototype.hasOwnProperty.call
/******/ __webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); };
/******/
/******/ // __webpack_public_path__
/******/ __webpack_require__.p = "/dist/";
/******/
/******/ // Load entry module and return exports
/******/ return __webpack_require__(__webpack_require__.s = 261);
/******/ })
/************************************************************************/
/******/ ({
/***/ 0:
/***/ (function(module, exports) {
/* globals __VUE_SSR_CONTEXT__ */
// IMPORTANT: Do NOT use ES2015 features in this file.
// This module is a runtime utility for cleaner component module output and will
// be included in the final webpack user bundle.
module.exports = function normalizeComponent (
rawScriptExports,
compiledTemplate,
functionalTemplate,
injectStyles,
scopeId,
moduleIdentifier /* server only */
) {
var esModule
var scriptExports = rawScriptExports = rawScriptExports || {}
// ES6 modules interop
var type = typeof rawScriptExports.default
if (type === 'object' || type === 'function') {
esModule = rawScriptExports
scriptExports = rawScriptExports.default
}
// Vue.extend constructor export interop
var options = typeof scriptExports === 'function'
? scriptExports.options
: scriptExports
// render functions
if (compiledTemplate) {
options.render = compiledTemplate.render
options.staticRenderFns = compiledTemplate.staticRenderFns
options._compiled = true
}
// functional template
if (functionalTemplate) {
options.functional = true
}
// scopedId
if (scopeId) {
options._scopeId = scopeId
}
var hook
if (moduleIdentifier) { // server build
hook = function (context) {
// 2.3 injection
context =
context || // cached call
(this.$vnode && this.$vnode.ssrContext) || // stateful
(this.parent && this.parent.$vnode && this.parent.$vnode.ssrContext) // functional
// 2.2 with runInNewContext: true
if (!context && typeof __VUE_SSR_CONTEXT__ !== 'undefined') {
context = __VUE_SSR_CONTEXT__
}
// inject component styles
if (injectStyles) {
injectStyles.call(this, context)
}
// register component module identifier for async chunk inference
if (context && context._registeredComponents) {
context._registeredComponents.add(moduleIdentifier)
}
}
// used by ssr in case component is cached and beforeCreate
// never gets called
options._ssrRegister = hook
} else if (injectStyles) {
hook = injectStyles
}
if (hook) {
var functional = options.functional
var existing = functional
? options.render
: options.beforeCreate
if (!functional) {
// inject component registration as beforeCreate hook
options.beforeCreate = existing
? [].concat(existing, hook)
: [hook]
} else {
// for template-only hot-reload because in that case the render fn doesn't
// go through the normalizer
options._injectStyles = hook
// register for functional component in vue file
options.render = function renderWithStyleInjection (h, context) {
hook.call(context)
return existing(h, context)
}
}
}
return {
esModule: esModule,
exports: scriptExports,
options: options
}
}
/***/ }),
/***/ 13:
/***/ (function(module, exports) {
module.exports = require("element-ui/lib/utils/popup");
/***/ }),
/***/ 20:
/***/ (function(module, exports) {
module.exports = require("element-ui/lib/utils/vdom");
/***/ }),
/***/ 261:
/***/ (function(module, exports, __webpack_require__) {
"use strict";
exports.__esModule = true;
var _main = __webpack_require__(262);
var _main2 = _interopRequireDefault(_main);
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; }
exports.default = _main2.default;
/***/ }),
/***/ 262:
/***/ (function(module, exports, __webpack_require__) {
"use strict";
exports.__esModule = true;
var _vue = __webpack_require__(4);
var _vue2 = _interopRequireDefault(_vue);
var _main = __webpack_require__(263);
var _main2 = _interopRequireDefault(_main);
var _popup = __webpack_require__(13);
var _vdom = __webpack_require__(20);
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; }
var NotificationConstructor = _vue2.default.extend(_main2.default);
var instance = void 0;
var instances = [];
var seed = 1;
var Notification = function Notification(options) {
if (_vue2.default.prototype.$isServer) return;
options = options || {};
var userOnClose = options.onClose;
var id = 'notification_' + seed++;
var position = options.position || 'top-right';
options.onClose = function () {
Notification.close(id, userOnClose);
};
instance = new NotificationConstructor({
data: options
});
if ((0, _vdom.isVNode)(options.message)) {
instance.$slots.default = [options.message];
options.message = 'REPLACED_BY_VNODE';
}
instance.id = id;
instance.$mount();
document.body.appendChild(instance.$el);
instance.visible = true;
instance.dom = instance.$el;
instance.dom.style.zIndex = _popup.PopupManager.nextZIndex();
var verticalOffset = options.offset || 0;
instances.filter(function (item) {
return item.position === position;
}).forEach(function (item) {
verticalOffset += item.$el.offsetHeight + 16;
});
verticalOffset += 16;
instance.verticalOffset = verticalOffset;
instances.push(instance);
return instance;
};
['success', 'warning', 'info', 'error'].forEach(function (type) {
Notification[type] = function (options) {
if (typeof options === 'string' || (0, _vdom.isVNode)(options)) {
options = {
message: options
};
}
options.type = type;
return Notification(options);
};
});
Notification.close = function (id, userOnClose) {
var index = -1;
var len = instances.length;
var instance = instances.filter(function (instance, i) {
if (instance.id === id) {
index = i;
return true;
}
return false;
})[0];
if (!instance) return;
if (typeof userOnClose === 'function') {
userOnClose(instance);
}
instances.splice(index, 1);
if (len <= 1) return;
var position = instance.position;
var removedHeight = instance.dom.offsetHeight;
for (var i = index; i < len - 1; i++) {
if (instances[i].position === position) {
instances[i].dom.style[instance.verticalProperty] = parseInt(instances[i].dom.style[instance.verticalProperty], 10) - removedHeight - 16 + 'px';
}
}
};
Notification.closeAll = function () {
for (var i = instances.length - 1; i >= 0; i--) {
instances[i].close();
}
};
exports.default = Notification;
/***/ }),
/***/ 263:
/***/ (function(module, __webpack_exports__, __webpack_require__) {
"use strict";
Object.defineProperty(__webpack_exports__, "__esModule", { value: true });
/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__babel_loader_node_modules_vue_loader_lib_selector_type_script_index_0_main_vue__ = __webpack_require__(264);
/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__babel_loader_node_modules_vue_loader_lib_selector_type_script_index_0_main_vue___default = __webpack_require__.n(__WEBPACK_IMPORTED_MODULE_0__babel_loader_node_modules_vue_loader_lib_selector_type_script_index_0_main_vue__);
/* harmony import */ var __WEBPACK_IMPORTED_MODULE_1__node_modules_vue_loader_lib_template_compiler_index_id_data_v_408e1c07_hasScoped_false_preserveWhitespace_false_buble_transforms_node_modules_vue_loader_lib_selector_type_template_index_0_main_vue__ = __webpack_require__(265);
var normalizeComponent = __webpack_require__(0)
/* script */
/* template */
/* template functional */
var __vue_template_functional__ = false
/* styles */
var __vue_styles__ = null
/* scopeId */
var __vue_scopeId__ = null
/* moduleIdentifier (server only) */
var __vue_module_identifier__ = null
var Component = normalizeComponent(
__WEBPACK_IMPORTED_MODULE_0__babel_loader_node_modules_vue_loader_lib_selector_type_script_index_0_main_vue___default.a,
__WEBPACK_IMPORTED_MODULE_1__node_modules_vue_loader_lib_template_compiler_index_id_data_v_408e1c07_hasScoped_false_preserveWhitespace_false_buble_transforms_node_modules_vue_loader_lib_selector_type_template_index_0_main_vue__["a" /* default */],
__vue_template_functional__,
__vue_styles__,
__vue_scopeId__,
__vue_module_identifier__
)
/* harmony default export */ __webpack_exports__["default"] = (Component.exports);
/***/ }),
/***/ 264:
/***/ (function(module, exports, __webpack_require__) {
"use strict";
exports.__esModule = true;
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
var typeMap = {
success: 'success',
info: 'info',
warning: 'warning',
error: 'error'
};
exports.default = {
data: function data() {
return {
visible: false,
title: '',
message: '',
duration: 4500,
type: '',
showClose: true,
customClass: '',
iconClass: '',
onClose: null,
onClick: null,
closed: false,
verticalOffset: 0,
timer: null,
dangerouslyUseHTMLString: false,
position: 'top-right'
};
},
computed: {
typeClass: function typeClass() {
return this.type && typeMap[this.type] ? 'el-icon-' + typeMap[this.type] : '';
},
horizontalClass: function horizontalClass() {
return this.position.indexOf('right') > -1 ? 'right' : 'left';
},
verticalProperty: function verticalProperty() {
return (/^top-/.test(this.position) ? 'top' : 'bottom'
);
},
positionStyle: function positionStyle() {
var _ref;
return _ref = {}, _ref[this.verticalProperty] = this.verticalOffset + 'px', _ref;
}
},
watch: {
closed: function closed(newVal) {
if (newVal) {
this.visible = false;
this.$el.addEventListener('transitionend', this.destroyElement);
}
}
},
methods: {
destroyElement: function destroyElement() {
this.$el.removeEventListener('transitionend', this.destroyElement);
this.$destroy(true);
this.$el.parentNode.removeChild(this.$el);
},
click: function click() {
if (typeof this.onClick === 'function') {
this.onClick();
}
},
close: function close() {
this.closed = true;
if (typeof this.onClose === 'function') {
this.onClose();
}
},
clearTimer: function clearTimer() {
clearTimeout(this.timer);
},
startTimer: function startTimer() {
var _this = this;
if (this.duration > 0) {
this.timer = setTimeout(function () {
if (!_this.closed) {
_this.close();
}
}, this.duration);
}
},
keydown: function keydown(e) {
if (e.keyCode === 46 || e.keyCode === 8) {
this.clearTimer(); // Delete or Backspace cancels the auto-close countdown
} else if (e.keyCode === 27) {
// Esc closes the notification
if (!this.closed) {
this.close();
}
} else {
this.startTimer(); // any other key restarts the countdown
}
}
},
mounted: function mounted() {
var _this2 = this;
if (this.duration > 0) {
this.timer = setTimeout(function () {
if (!_this2.closed) {
_this2.close();
}
}, this.duration);
}
document.addEventListener('keydown', this.keydown);
},
beforeDestroy: function beforeDestroy() {
document.removeEventListener('keydown', this.keydown);
}
};
/***/ }),
/***/ 265:
/***/ (function(module, __webpack_exports__, __webpack_require__) {
"use strict";
var render = function () {var _vm=this;var _h=_vm.$createElement;var _c=_vm._self._c||_h;return _c('transition',{attrs:{"name":"el-notification-fade"}},[_c('div',{directives:[{name:"show",rawName:"v-show",value:(_vm.visible),expression:"visible"}],class:['el-notification', _vm.customClass, _vm.horizontalClass],style:(_vm.positionStyle),attrs:{"role":"alert"},on:{"mouseenter":function($event){_vm.clearTimer()},"mouseleave":function($event){_vm.startTimer()},"click":_vm.click}},[(_vm.type || _vm.iconClass)?_c('i',{staticClass:"el-notification__icon",class:[ _vm.typeClass, _vm.iconClass ]}):_vm._e(),_c('div',{staticClass:"el-notification__group",class:{ 'is-with-icon': _vm.typeClass || _vm.iconClass }},[_c('h2',{staticClass:"el-notification__title",domProps:{"textContent":_vm._s(_vm.title)}}),_c('div',{directives:[{name:"show",rawName:"v-show",value:(_vm.message),expression:"message"}],staticClass:"el-notification__content"},[_vm._t("default",[(!_vm.dangerouslyUseHTMLString)?_c('p',[_vm._v(_vm._s(_vm.message))]):_c('p',{domProps:{"innerHTML":_vm._s(_vm.message)}})])],2),(_vm.showClose)?_c('div',{staticClass:"el-notification__closeBtn el-icon-close",on:{"click":function($event){$event.stopPropagation();_vm.close($event)}}}):_vm._e()])])])}
var staticRenderFns = []
var esExports = { render: render, staticRenderFns: staticRenderFns }
/* harmony default export */ __webpack_exports__["a"] = (esExports);
/***/ }),
/***/ 4:
/***/ (function(module, exports) {
module.exports = require("vue");
/***/ })
/******/ });
#include <GuiToolbar.au3>
#include <GuiToolTip.au3>
#include <GUIConstantsEx.au3>
#include <WindowsConstants.au3>
#include <Constants.au3>
$Debug_TB = False ; Check ClassName being passed to functions, set to True and use a handle to another control to see it work
Global Enum $idNew = 1000, $idOpen, $idSave, $idHelp
_Main()
Func _Main()
Local $hGUI, $hToolbar, $hToolTip
; Create GUI
$hGUI = GUICreate("Toolbar", 400, 300)
$hToolbar = _GUICtrlToolbar_Create($hGUI)
GUISetState()
; Create ToolTip
$hToolTip = _GUIToolTip_Create($hToolbar)
_GUICtrlToolbar_SetToolTips($hToolbar, $hToolTip)
; Add standard system bitmaps
Switch _GUICtrlToolbar_GetBitmapFlags($hToolbar)
Case 0
_GUICtrlToolbar_AddBitmap($hToolbar, 1, -1, $IDB_STD_SMALL_COLOR)
Case 2
_GUICtrlToolbar_AddBitmap($hToolbar, 1, -1, $IDB_STD_LARGE_COLOR)
EndSwitch
; Add buttons
_GUICtrlToolbar_AddButton($hToolbar, $idNew, $STD_FILENEW)
_GUICtrlToolbar_AddButton($hToolbar, $idOpen, $STD_FILEOPEN)
_GUICtrlToolbar_AddButton($hToolbar, $idSave, $STD_FILESAVE)
_GUICtrlToolbar_AddButtonSep($hToolbar)
_GUICtrlToolbar_AddButton($hToolbar, $idHelp, $STD_HELP)
; Show ToolTip handle
MsgBox(4096, "Information", "ToolTip handle .: 0x" & Hex(_GUICtrlToolbar_GetToolTips($hToolbar)) & @CRLF & _
"IsPtr = " & IsPtr(_GUICtrlToolbar_GetToolTips($hToolbar)) & " IsHWnd = " & IsHWnd(_GUICtrlToolbar_GetToolTips($hToolbar)))
; Register WM_NOTIFY so the ToolTip can request button text
GUIRegisterMsg($WM_NOTIFY, "WM_NOTIFY")
; Loop until user exits
Do
Until GUIGetMsg() = $GUI_EVENT_CLOSE
EndFunc ;==>_Main
; Handle WM_NOTIFY messages
Func WM_NOTIFY($hWnd, $iMsg, $iwParam, $ilParam)
#forceref $hWnd, $iMsg, $iwParam, $ilParam
Local $tInfo, $iID, $iCode
$tInfo = DllStructCreate($tagNMTTDISPINFO, $ilParam)
$iCode = DllStructGetData($tInfo, "Code")
If $iCode = $TTN_GETDISPINFOW Then
$iID = DllStructGetData($tInfo, "IDFrom")
Switch $iID
Case $idNew
DllStructSetData($tInfo, "aText", "New")
Case $idOpen
DllStructSetData($tInfo, "aText", "Open")
Case $idSave
DllStructSetData($tInfo, "aText", "Save")
Case $idHelp
DllStructSetData($tInfo, "aText", "Help")
EndSwitch
EndIf
Return $GUI_RUNDEFMSG
EndFunc ;==>WM_NOTIFY
<?php
class ControllerAccountReturn extends Controller {
private $error = array();
public function index() {
if (!$this->customer->isLogged()) {
$this->session->data['redirect'] = $this->url->link('account/return', '', true);
$this->response->redirect($this->url->link('account/login', '', true));
}
$this->load->language('account/return');
$this->document->setTitle($this->language->get('heading_title'));
$data['breadcrumbs'] = array();
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_home'),
'href' => $this->url->link('common/home')
);
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_account'),
'href' => $this->url->link('account/account', '', true)
);
$url = '';
if (isset($this->request->get['page'])) {
$url .= '&page=' . $this->request->get['page'];
}
$data['breadcrumbs'][] = array(
'text' => $this->language->get('heading_title'),
'href' => $this->url->link('account/return', $url, true)
);
$data['heading_title'] = $this->language->get('heading_title');
$data['text_empty'] = $this->language->get('text_empty');
$data['column_return_id'] = $this->language->get('column_return_id');
$data['column_order_id'] = $this->language->get('column_order_id');
$data['column_status'] = $this->language->get('column_status');
$data['column_date_added'] = $this->language->get('column_date_added');
$data['column_customer'] = $this->language->get('column_customer');
$data['button_view'] = $this->language->get('button_view');
$data['button_continue'] = $this->language->get('button_continue');
$this->load->model('account/return');
if (isset($this->request->get['page'])) {
$page = $this->request->get['page'];
} else {
$page = 1;
}
$data['returns'] = array();
$return_total = $this->model_account_return->getTotalReturns();
$results = $this->model_account_return->getReturns(($page - 1) * 10, 10);
foreach ($results as $result) {
$data['returns'][] = array(
'return_id' => $result['return_id'],
'order_id' => $result['order_id'],
'name' => $result['firstname'] . ' ' . $result['lastname'],
'status' => $result['status'],
'date_added' => date($this->language->get('date_format_short'), strtotime($result['date_added'])),
'href' => $this->url->link('account/return/info', 'return_id=' . $result['return_id'] . $url, true)
);
}
$pagination = new Pagination();
$pagination->total = $return_total;
$pagination->page = $page;
$pagination->limit = $this->config->get($this->config->get('config_theme') . '_product_limit');
$pagination->url = $this->url->link('account/return', 'page={page}', true);
$data['pagination'] = $pagination->render();
$data['results'] = sprintf($this->language->get('text_pagination'), ($return_total) ? (($page - 1) * $this->config->get($this->config->get('config_theme') . '_product_limit')) + 1 : 0, ((($page - 1) * $this->config->get($this->config->get('config_theme') . '_product_limit')) > ($return_total - $this->config->get($this->config->get('config_theme') . '_product_limit'))) ? $return_total : ((($page - 1) * $this->config->get($this->config->get('config_theme') . '_product_limit')) + $this->config->get($this->config->get('config_theme') . '_product_limit')), $return_total, ceil($return_total / $this->config->get($this->config->get('config_theme') . '_product_limit')));
$data['continue'] = $this->url->link('account/account', '', true);
$data['column_left'] = $this->load->controller('common/column_left');
$data['column_right'] = $this->load->controller('common/column_right');
$data['content_top'] = $this->load->controller('common/content_top');
$data['content_bottom'] = $this->load->controller('common/content_bottom');
$data['footer'] = $this->load->controller('common/footer');
$data['header'] = $this->load->controller('common/header');
$this->response->setOutput($this->load->view('account/return_list', $data));
}
public function info() {
$this->load->language('account/return');
if (isset($this->request->get['return_id'])) {
$return_id = $this->request->get['return_id'];
} else {
$return_id = 0;
}
if (!$this->customer->isLogged()) {
$this->session->data['redirect'] = $this->url->link('account/return/info', 'return_id=' . $return_id, true);
$this->response->redirect($this->url->link('account/login', '', true));
}
$this->load->model('account/return');
$return_info = $this->model_account_return->getReturn($return_id);
if ($return_info) {
$this->document->setTitle($this->language->get('text_return'));
$data['breadcrumbs'] = array();
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_home'),
'href' => $this->url->link('common/home', '', true)
);
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_account'),
'href' => $this->url->link('account/account', '', true)
);
$url = '';
if (isset($this->request->get['page'])) {
$url .= '&page=' . $this->request->get['page'];
}
$data['breadcrumbs'][] = array(
'text' => $this->language->get('heading_title'),
'href' => $this->url->link('account/return', $url, true)
);
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_return'),
'href' => $this->url->link('account/return/info', 'return_id=' . $this->request->get['return_id'] . $url, true)
);
$data['heading_title'] = $this->language->get('text_return');
$data['text_return_detail'] = $this->language->get('text_return_detail');
$data['text_return_id'] = $this->language->get('text_return_id');
$data['text_order_id'] = $this->language->get('text_order_id');
$data['text_date_ordered'] = $this->language->get('text_date_ordered');
$data['text_customer'] = $this->language->get('text_customer');
$data['text_email'] = $this->language->get('text_email');
$data['text_telephone'] = $this->language->get('text_telephone');
$data['text_status'] = $this->language->get('text_status');
$data['text_date_added'] = $this->language->get('text_date_added');
$data['text_product'] = $this->language->get('text_product');
$data['text_reason'] = $this->language->get('text_reason');
$data['text_comment'] = $this->language->get('text_comment');
$data['text_history'] = $this->language->get('text_history');
$data['text_no_results'] = $this->language->get('text_no_results');
$data['column_product'] = $this->language->get('column_product');
$data['column_model'] = $this->language->get('column_model');
$data['column_quantity'] = $this->language->get('column_quantity');
$data['column_opened'] = $this->language->get('column_opened');
$data['column_reason'] = $this->language->get('column_reason');
$data['column_action'] = $this->language->get('column_action');
$data['column_date_added'] = $this->language->get('column_date_added');
$data['column_status'] = $this->language->get('column_status');
$data['column_comment'] = $this->language->get('column_comment');
$data['button_continue'] = $this->language->get('button_continue');
$data['return_id'] = $return_info['return_id'];
$data['order_id'] = $return_info['order_id'];
$data['date_ordered'] = date($this->language->get('date_format_short'), strtotime($return_info['date_ordered']));
$data['date_added'] = date($this->language->get('date_format_short'), strtotime($return_info['date_added']));
$data['firstname'] = $return_info['firstname'];
$data['lastname'] = $return_info['lastname'];
$data['email'] = $return_info['email'];
$data['telephone'] = $return_info['telephone'];
$data['product'] = $return_info['product'];
$data['model'] = $return_info['model'];
$data['quantity'] = $return_info['quantity'];
$data['reason'] = $return_info['reason'];
$data['opened'] = $return_info['opened'] ? $this->language->get('text_yes') : $this->language->get('text_no');
$data['comment'] = nl2br($return_info['comment']);
$data['action'] = $return_info['action'];
$data['histories'] = array();
$results = $this->model_account_return->getReturnHistories($this->request->get['return_id']);
foreach ($results as $result) {
$data['histories'][] = array(
'date_added' => date($this->language->get('date_format_short'), strtotime($result['date_added'])),
'status' => $result['status'],
'comment' => nl2br($result['comment'])
);
}
$data['continue'] = $this->url->link('account/return', $url, true);
$data['column_left'] = $this->load->controller('common/column_left');
$data['column_right'] = $this->load->controller('common/column_right');
$data['content_top'] = $this->load->controller('common/content_top');
$data['content_bottom'] = $this->load->controller('common/content_bottom');
$data['footer'] = $this->load->controller('common/footer');
$data['header'] = $this->load->controller('common/header');
$this->response->setOutput($this->load->view('account/return_info', $data));
} else {
$this->document->setTitle($this->language->get('text_return'));
$data['breadcrumbs'] = array();
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_home'),
'href' => $this->url->link('common/home')
);
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_account'),
'href' => $this->url->link('account/account', '', true)
);
$data['breadcrumbs'][] = array(
'text' => $this->language->get('heading_title'),
'href' => $this->url->link('account/return', '', true)
);
$url = '';
if (isset($this->request->get['page'])) {
$url .= '&page=' . $this->request->get['page'];
}
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_return'),
'href' => $this->url->link('account/return/info', 'return_id=' . $return_id . $url, true)
);
$data['heading_title'] = $this->language->get('text_return');
$data['text_error'] = $this->language->get('text_error');
$data['button_continue'] = $this->language->get('button_continue');
$data['continue'] = $this->url->link('account/return', '', true);
$data['column_left'] = $this->load->controller('common/column_left');
$data['column_right'] = $this->load->controller('common/column_right');
$data['content_top'] = $this->load->controller('common/content_top');
$data['content_bottom'] = $this->load->controller('common/content_bottom');
$data['footer'] = $this->load->controller('common/footer');
$data['header'] = $this->load->controller('common/header');
$this->response->setOutput($this->load->view('error/not_found', $data));
}
}
public function add() {
$this->load->language('account/return');
$this->load->model('account/return');
if (($this->request->server['REQUEST_METHOD'] == 'POST') && $this->validate()) {
$return_id = $this->model_account_return->addReturn($this->request->post);
// Add to activity log
if ($this->config->get('config_customer_activity')) {
$this->load->model('account/activity');
if ($this->customer->isLogged()) {
$activity_data = array(
'customer_id' => $this->customer->getId(),
'name' => $this->customer->getFirstName() . ' ' . $this->customer->getLastName(),
'return_id' => $return_id
);
$this->model_account_activity->addActivity('return_account', $activity_data);
} else {
$activity_data = array(
'name' => $this->request->post['firstname'] . ' ' . $this->request->post['lastname'],
'return_id' => $return_id
);
$this->model_account_activity->addActivity('return_guest', $activity_data);
}
}
$this->response->redirect($this->url->link('account/return/success', '', true));
}
$this->document->setTitle($this->language->get('heading_title'));
$this->document->addScript('catalog/view/javascript/jquery/datetimepicker/moment.js');
$this->document->addScript('catalog/view/javascript/jquery/datetimepicker/locale/'.$this->session->data['language'].'.js');
$this->document->addScript('catalog/view/javascript/jquery/datetimepicker/bootstrap-datetimepicker.min.js');
$this->document->addStyle('catalog/view/javascript/jquery/datetimepicker/bootstrap-datetimepicker.min.css');
$data['breadcrumbs'] = array();
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_home'),
'href' => $this->url->link('common/home')
);
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_account'),
'href' => $this->url->link('account/account', '', true)
);
$data['breadcrumbs'][] = array(
'text' => $this->language->get('heading_title'),
'href' => $this->url->link('account/return/add', '', true)
);
$data['heading_title'] = $this->language->get('heading_title');
$data['text_description'] = $this->language->get('text_description');
$data['text_order'] = $this->language->get('text_order');
$data['text_product'] = $this->language->get('text_product');
$data['text_yes'] = $this->language->get('text_yes');
$data['text_no'] = $this->language->get('text_no');
$data['entry_order_id'] = $this->language->get('entry_order_id');
$data['entry_date_ordered'] = $this->language->get('entry_date_ordered');
$data['entry_firstname'] = $this->language->get('entry_firstname');
$data['entry_lastname'] = $this->language->get('entry_lastname');
$data['entry_email'] = $this->language->get('entry_email');
$data['entry_telephone'] = $this->language->get('entry_telephone');
$data['entry_product'] = $this->language->get('entry_product');
$data['entry_model'] = $this->language->get('entry_model');
$data['entry_quantity'] = $this->language->get('entry_quantity');
$data['entry_reason'] = $this->language->get('entry_reason');
$data['entry_opened'] = $this->language->get('entry_opened');
$data['entry_fault_detail'] = $this->language->get('entry_fault_detail');
$data['button_submit'] = $this->language->get('button_submit');
$data['button_back'] = $this->language->get('button_back');
if (isset($this->error['warning'])) {
$data['error_warning'] = $this->error['warning'];
} else {
$data['error_warning'] = '';
}
if (isset($this->error['order_id'])) {
$data['error_order_id'] = $this->error['order_id'];
} else {
$data['error_order_id'] = '';
}
if (isset($this->error['firstname'])) {
$data['error_firstname'] = $this->error['firstname'];
} else {
$data['error_firstname'] = '';
}
if (isset($this->error['lastname'])) {
$data['error_lastname'] = $this->error['lastname'];
} else {
$data['error_lastname'] = '';
}
if (isset($this->error['email'])) {
$data['error_email'] = $this->error['email'];
} else {
$data['error_email'] = '';
}
if (isset($this->error['telephone'])) {
$data['error_telephone'] = $this->error['telephone'];
} else {
$data['error_telephone'] = '';
}
if (isset($this->error['product'])) {
$data['error_product'] = $this->error['product'];
} else {
$data['error_product'] = '';
}
if (isset($this->error['model'])) {
$data['error_model'] = $this->error['model'];
} else {
$data['error_model'] = '';
}
if (isset($this->error['reason'])) {
$data['error_reason'] = $this->error['reason'];
} else {
$data['error_reason'] = '';
}
$data['action'] = $this->url->link('account/return/add', '', true);
$this->load->model('account/order');
if (isset($this->request->get['order_id'])) {
$order_info = $this->model_account_order->getOrder($this->request->get['order_id']);
}
$this->load->model('catalog/product');
if (isset($this->request->get['product_id'])) {
$product_info = $this->model_catalog_product->getProduct($this->request->get['product_id']);
}
if (isset($this->request->post['order_id'])) {
$data['order_id'] = $this->request->post['order_id'];
} elseif (!empty($order_info)) {
$data['order_id'] = $order_info['order_id'];
} else {
$data['order_id'] = '';
}
if (isset($this->request->post['date_ordered'])) {
$data['date_ordered'] = $this->request->post['date_ordered'];
} elseif (!empty($order_info)) {
$data['date_ordered'] = date('Y-m-d', strtotime($order_info['date_added']));
} else {
$data['date_ordered'] = '';
}
if (isset($this->request->post['firstname'])) {
$data['firstname'] = $this->request->post['firstname'];
} elseif (!empty($order_info)) {
$data['firstname'] = $order_info['firstname'];
} else {
$data['firstname'] = $this->customer->getFirstName();
}
if (isset($this->request->post['lastname'])) {
$data['lastname'] = $this->request->post['lastname'];
} elseif (!empty($order_info)) {
$data['lastname'] = $order_info['lastname'];
} else {
$data['lastname'] = $this->customer->getLastName();
}
if (isset($this->request->post['email'])) {
$data['email'] = $this->request->post['email'];
} elseif (!empty($order_info)) {
$data['email'] = $order_info['email'];
} else {
$data['email'] = $this->customer->getEmail();
}
if (isset($this->request->post['telephone'])) {
$data['telephone'] = $this->request->post['telephone'];
} elseif (!empty($order_info)) {
$data['telephone'] = $order_info['telephone'];
} else {
$data['telephone'] = $this->customer->getTelephone();
}
if (isset($this->request->post['product'])) {
$data['product'] = $this->request->post['product'];
} elseif (!empty($product_info)) {
$data['product'] = $product_info['name'];
} else {
$data['product'] = '';
}
if (isset($this->request->post['model'])) {
$data['model'] = $this->request->post['model'];
} elseif (!empty($product_info)) {
$data['model'] = $product_info['model'];
} else {
$data['model'] = '';
}
if (isset($this->request->post['quantity'])) {
$data['quantity'] = $this->request->post['quantity'];
} else {
$data['quantity'] = 1;
}
if (isset($this->request->post['opened'])) {
$data['opened'] = $this->request->post['opened'];
} else {
$data['opened'] = false;
}
if (isset($this->request->post['return_reason_id'])) {
$data['return_reason_id'] = $this->request->post['return_reason_id'];
} else {
$data['return_reason_id'] = '';
}
$this->load->model('localisation/return_reason');
$data['return_reasons'] = $this->model_localisation_return_reason->getReturnReasons();
if (isset($this->request->post['comment'])) {
$data['comment'] = $this->request->post['comment'];
} else {
$data['comment'] = '';
}
// Captcha
if ($this->config->get($this->config->get('config_captcha') . '_status') && in_array('return', (array)$this->config->get('config_captcha_page'))) {
$data['captcha'] = $this->load->controller('extension/captcha/' . $this->config->get('config_captcha'), $this->error);
} else {
$data['captcha'] = '';
}
if ($this->config->get('config_return_id')) {
$this->load->model('catalog/information');
$information_info = $this->model_catalog_information->getInformation($this->config->get('config_return_id'));
if ($information_info) {
$data['text_agree'] = sprintf($this->language->get('text_agree'), $this->url->link('information/information/agree', 'information_id=' . $this->config->get('config_return_id'), true), $information_info['title'], $information_info['title']);
} else {
$data['text_agree'] = '';
}
} else {
$data['text_agree'] = '';
}
if (isset($this->request->post['agree'])) {
$data['agree'] = $this->request->post['agree'];
} else {
$data['agree'] = false;
}
$data['back'] = $this->url->link('account/account', '', true);
$data['column_left'] = $this->load->controller('common/column_left');
$data['column_right'] = $this->load->controller('common/column_right');
$data['content_top'] = $this->load->controller('common/content_top');
$data['content_bottom'] = $this->load->controller('common/content_bottom');
$data['footer'] = $this->load->controller('common/footer');
$data['header'] = $this->load->controller('common/header');
$this->response->setOutput($this->load->view('account/return_form', $data));
}
protected function validate() {
if (!$this->request->post['order_id']) {
$this->error['order_id'] = $this->language->get('error_order_id');
}
if ((utf8_strlen(trim($this->request->post['firstname'])) < 1) || (utf8_strlen(trim($this->request->post['firstname'])) > 32)) {
$this->error['firstname'] = $this->language->get('error_firstname');
}
if ((utf8_strlen(trim($this->request->post['lastname'])) < 1) || (utf8_strlen(trim($this->request->post['lastname'])) > 32)) {
$this->error['lastname'] = $this->language->get('error_lastname');
}
if ((utf8_strlen($this->request->post['email']) > 96) || !preg_match($this->config->get('config_mail_regexp'), $this->request->post['email'])) {
$this->error['email'] = $this->language->get('error_email');
}
if ((utf8_strlen($this->request->post['telephone']) < 3) || (utf8_strlen($this->request->post['telephone']) > 32)) {
$this->error['telephone'] = $this->language->get('error_telephone');
}
if ((utf8_strlen($this->request->post['product']) < 1) || (utf8_strlen($this->request->post['product']) > 255)) {
$this->error['product'] = $this->language->get('error_product');
}
if ((utf8_strlen($this->request->post['model']) < 1) || (utf8_strlen($this->request->post['model']) > 64)) {
$this->error['model'] = $this->language->get('error_model');
}
if (empty($this->request->post['return_reason_id'])) {
$this->error['reason'] = $this->language->get('error_reason');
}
if ($this->config->get($this->config->get('config_captcha') . '_status') && in_array('return', (array)$this->config->get('config_captcha_page'))) {
$captcha = $this->load->controller('extension/captcha/' . $this->config->get('config_captcha') . '/validate');
if ($captcha) {
$this->error['captcha'] = $captcha;
}
}
if ($this->config->get('config_return_id')) {
$this->load->model('catalog/information');
$information_info = $this->model_catalog_information->getInformation($this->config->get('config_return_id'));
if ($information_info && !isset($this->request->post['agree'])) {
$this->error['warning'] = sprintf($this->language->get('error_agree'), $information_info['title']);
}
}
return !$this->error;
}
public function success() {
$this->load->language('account/return');
$this->document->setTitle($this->language->get('heading_title'));
$data['breadcrumbs'] = array();
$data['breadcrumbs'][] = array(
'text' => $this->language->get('text_home'),
'href' => $this->url->link('common/home')
);
$data['breadcrumbs'][] = array(
'text' => $this->language->get('heading_title'),
'href' => $this->url->link('account/return', '', true)
);
$data['heading_title'] = $this->language->get('heading_title');
$data['text_message'] = $this->language->get('text_message');
$data['button_continue'] = $this->language->get('button_continue');
$data['continue'] = $this->url->link('common/home');
$data['column_left'] = $this->load->controller('common/column_left');
$data['column_right'] = $this->load->controller('common/column_right');
$data['content_top'] = $this->load->controller('common/content_top');
$data['content_bottom'] = $this->load->controller('common/content_bottom');
$data['footer'] = $this->load->controller('common/footer');
$data['header'] = $this->load->controller('common/header');
$this->response->setOutput($this->load->view('common/success', $data));
}
}
---
title: "Legacy Methods"
---
{% include toc title="Table of Contents" %}
### Required Reading
Recently, there have been numerous improvements and advancements to the CFW installation methods.
For this reason, it is recommended that you use the most recent method described in the [Get Started](get-started) section.
Nevertheless, the older "legacy" methods are preserved here for various reasons. You will still need to have completed [Seedminer](seedminer) beforehand.
If you need help, you can join the [Nintendo Homebrew Discord server](https://discord.gg/MWxPgEp) and ask, in English.
#### Section I - Compatibility Test
The following exploits make use of one of the two DS applications built into the 3DS: Nintendo DS Connections and DS Download Play.
If both Nintendo DS Connections and DS Download Play are broken, you will need to repair them with [TWLFix-3DS](https://github.com/MechanicalDragon0687/TWLFix-3DS/releases/) using a homebrew entrypoint, such as Pichaxx.
#### Nintendo DS Connections Test (used for Fredtool)
1. Go to "System Settings", then "Internet Settings", then "Nintendo DS Connections"
1. Press "OK"
1. If your console loads the "Nintendo Wi-Fi Connection Setup" menu, the test was successful
+ If the screen stays black or appears to freeze, the test failed
1. Exit this menu
#### DS Download Play Test (used for Frogtool)
1. Launch the "Download Play" application ({: height="24px" width="24px"})
1. Select "Nintendo DS"
1. If your console loads "Download software via DS Download Play", the test was successful
+ If the screen stays black or appears to freeze, the test failed
1. Exit this menu
___
1. [BB3-USM](installing-boot9strap-(usm)): Seedminer + BannerBomb3 + unSAFE_MODE
+ This method requires working shoulder buttons
1. [BannerBomb3](bannerbomb3): Seedminer + BannerBomb3 + Fredtool
+ This is the recommended method if your shoulder buttons do not work
1. [Pichaxx](homebrew-launcher-(pichaxx)): Seedminer + Pichaxx + Frogtool
+ This is the recommended method if your Nintendo DSiWare Management menu does not work
"pile_set_name": "Github"
} |
<?php
/*
You may not change or alter any portion of this comment or credits
of supporting developers from this source code or any supporting source code
which is considered copyrighted (c) material of the original comment or credit authors.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
*/
use Xoops\Core\PreloadItem;
/**
* Page core preloads
*
* @copyright XOOPS Project (http://xoops.org)
* @license GNU GPL 2 or later (http://www.gnu.org/licenses/gpl-2.0.html)
* @package page
* @since 2.6.0
* @author DuGris (aka Laurent JEN)
* @version $Id$
*/
class PagePreload extends PreloadItem
{
/**
* listen for core.include.common.classmaps
* add any module specific class map entries
*
* @param mixed $args not used
*
* @return void
*/
public static function eventCoreIncludeCommonClassmaps($args)
{
$path = dirname(__DIR__);
XoopsLoad::addMap(array(
'page' => $path . '/class/helper.php',
));
}
}
# XML configuration files
* Spring Boot advocates zero configuration, i.e. no XML configuration, but real projects may have special requirements that force you to use XML. In that case, XML configuration can be loaded through Spring's @ImportResource annotation.
* @ImportResource({"classpath:some-context.xml","classpath:other-context.xml"})
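A minimal sketch of how the annotation is typically placed on the main application class (the class name and XML file names below are illustrative, not from any real project):

```java
// Illustrative sketch: pulls legacy XML bean definitions into the Spring context.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ImportResource;

@SpringBootApplication
@ImportResource({"classpath:some-context.xml", "classpath:other-context.xml"})
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```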
# How Spring Boot auto-configuration works
* When instantiating the SpringApplication object, Spring Boot loads the META-INF/spring.factories file and registers the configurations listed there into the Spring container.
* See spring.factories under spring-boot.jar/META-INF
# Conditional annotations
> Spring Boot provides its own set of conditional annotations (Conditional Annotations).
* For example @ConditionalOnBean, @ConditionalOnClass, @ConditionalOnExpression, @ConditionalOnMissingBean, etc.
* @ConditionalOnClass checks whether the corresponding class exists on the class loader; if it does, the annotated class is eligible to be registered with the Spring container, otherwise it is skipped.
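A hedged sketch of the @ConditionalOnClass pattern (the configuration class name is illustrative; Gson is used only as an example of a class that may or may not be on the classpath):

```java
// Sketch: this configuration only takes effect when Gson is on the classpath,
// and the bean backs off if the user already defined their own Gson bean.
import com.google.gson.Gson;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConditionalOnClass(Gson.class)
public class GsonAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean  // skip if a Gson bean already exists
    public Gson gson() {
        return new Gson();
    }
}
```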
# Static resources
Serve static resources from specified locations:
* spring.resources.static-locations=classpath:/META-INF/resources/,classpath:/static/
# Custom message converters
> To customize a message converter, just add it as a @Bean in a @Configuration class; Spring Boot will automatically register it in the container.
```java
@Bean
public StringHttpMessageConverter stringHttpMessageConverter() {
StringHttpMessageConverter converter = new StringHttpMessageConverter(Charset.forName("UTF-8"));
return converter;
}
```
# Customizing the Spring MVC configuration
Sometimes we need to configure Spring MVC ourselves instead of using the defaults, for example to add an interceptor. This is done by extending WebMvcConfigurerAdapter and overriding its methods.
```java
@Configuration
public class SpringMVCConfig extends WebMvcConfigurerAdapter{
@Autowired
private UserLoginHandlerInterceptor userLoginHandlerInterceptor;
@Override
public void addInterceptors(InterceptorRegistry registry) {
registry.addInterceptor(userLoginHandlerInterceptor).addPathPatterns("/api/user/**");
}
}
```
# Integrating MyBatis with Spring Boot
> There are two ways to integrate MyBatis with Spring Boot:
* First: use the official Spring Boot starter provided by MyBatis: https://github.com/mybatis/spring-boot-starter
* Second: use mybatis-spring, the traditional integration approach
# Transaction management
In Spring Boot it is recommended to use the @Transactional annotation to declare transactions.
Once the jdbc dependency is introduced, Spring Boot automatically injects a DataSourceTransactionManager or JpaTransactionManager by default, so transactions can be configured with the @Transactional annotation without any extra configuration.
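A minimal sketch of declarative transaction demarcation (the service, DAO, and method names are illustrative assumptions, not from a real project):

```java
// Sketch: both DAO calls commit or roll back together; by default the
// transaction is rolled back when a RuntimeException escapes the method.
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    private final AccountDao accountDao; // hypothetical DAO

    public AccountService(AccountDao accountDao) {
        this.accountDao = accountDao;
    }

    @Transactional
    public void transfer(long from, long to, long amount) {
        accountDao.debit(from, amount);
        accountDao.credit(to, amount);
    }
}
```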
# Redis and Spring integration
# httpClient
Prototype scope (multiple instances)
# Integrating RabbitMQ with Spring
> pom.xml
```xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
```
> Configure the queue
```java
@Configuration
public class RabbitMQSpringConfig {
@Autowired
private ConnectionFactory connectionFactory;
@Bean
public RabbitAdmin rabbitAdmin() {
return new RabbitAdmin(connectionFactory);
}
@Bean
public Queue blogUserLoginQueue() {
return new Queue("BLOG-USER-LOGIN-QUEUE", true);
}
}
```
> Set up the listener
* @Component on the class
* @RabbitListener(queues = "BLOG-USER-LOGIN-QUEUE")
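The two annotations above combine into a listener class like the following sketch (the class name and handler logic are illustrative):

```java
// Sketch: consumes messages from the queue declared in the configuration above.
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class BlogUserLoginListener {

    @RabbitListener(queues = "BLOG-USER-LOGIN-QUEUE")
    public void onLogin(String message) {
        // process the login event, e.g. record the login time
        System.out.println("user login event: " + message);
    }
}
```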
# Dubbo integration
```java
@ImportResource({"classpath:dubbo/dubbo-consumer.xml"})
```
# hacky rules to try to activate lvm when we get new block devs...
#
# Copyright 2008, Red Hat, Inc.
# Jeremy Katz <katzj@redhat.com>
SUBSYSTEM!="block", GOTO="lvm_end"
ACTION!="add|change", GOTO="lvm_end"
# Also don't process disks that are slated to be a multipath device
ENV{DM_MULTIPATH_DEVICE_PATH}=="1", GOTO="lvm_end"
KERNEL=="dm-[0-9]*", ACTION=="add", GOTO="lvm_end"
ENV{ID_FS_TYPE}!="LVM?_member", GOTO="lvm_end"
PROGRAM=="/bin/sh -c 'for i in $sys/$devpath/holders/dm-[0-9]*; do [ -e $$i ] && exit 0; done; exit 1;' ", \
GOTO="lvm_end"
RUN+="/sbin/initqueue --settled --onetime --unique /sbin/lvm_scan"
RUN+="/sbin/initqueue --timeout --name 51-lvm_scan --onetime --unique /sbin/lvm_scan --partial"
RUN+="/bin/sh -c '>/tmp/.lvm_scan-%k;'"
LABEL="lvm_end"
#!/usr/bin/env ruby
require "bundler/setup"
require "webinspector"
# You can add fixtures and/or initialization code here to make experimenting
# with your gem easier. You can also use a different console, if you like.
# (If you use this, don't forget to add pry to your Gemfile!)
# require "pry"
# Pry.start
require "irb"
IRB.start
import sbt._
import Keys._
import com.typesafe.sbt.packager.docker.{Cmd, DockerKeys}
import RestartCommand._
object CommonSettingsPlugin extends AutoPlugin with DockerKeys {
override def trigger = allRequirements
override lazy val projectSettings = Seq(
updateOptions := updateOptions.value.withCachedResolution(cachedResoluton = true),
resolvers ++= Seq(
"Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
),
libraryDependencies ++= Seq(
"com.github.nscala-time" %% "nscala-time" % "2.16.0",
"ch.qos.logback" % "logback-classic" % "1.1.7",
"org.scalatest" %% "scalatest" % "3.0.1" % "test",
"org.mockito" % "mockito-core" % "1.9.5" % "test",
"commons-io" % "commons-io" % "2.4" % "test",
"org.scalacheck" %% "scalacheck" % "1.13.4" % "test"
),
commands ++= Seq(restart)
)
}
/*
Copyright (C) 2014 2015 2016 Johan Mattsson
This library is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as
published by the Free Software Foundation; either version 3 of the
License, or (at your option) any later version.
This library is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
*/
using Cairo;
using Math;
namespace BirdFont {
internal class SettingsTab : SettingsDisplay {
string restart_message = "You need to restart the program in order to apply this setting.";
public SettingsTab () {
base ();
create_setting_items ();
}
public override void create_setting_items () {
tools.clear ();
// setting items
tools.add (new SettingsItem.head_line (t_("Settings")));
SpinButton stroke_width = new SpinButton ("stroke_width");
tools.add (new SettingsItem (stroke_width, t_("Stroke width")));
stroke_width.set_max (4);
stroke_width.set_min (0.002);
stroke_width.set_value_round (1);
if (Preferences.get ("stroke_width_for_open_paths") != "") {
stroke_width.set_value (Preferences.get ("stroke_width_for_open_paths"));
}
stroke_width.new_value_action.connect ((self) => {
Glyph g = MainWindow.get_current_glyph ();
Path.stroke_width = stroke_width.get_value ();
g.redraw_area (0, 0, g.allocation.width, g.allocation.height);
Preferences.set ("stroke_width_for_open_paths", stroke_width.get_display_value ());
MainWindow.get_toolbox ().redraw ((int) stroke_width.x, (int) stroke_width.y, 70, 70);
});
Path.stroke_width = stroke_width.get_value ();
// adjust precision
string precision_value = Preferences.get ("precision");
if (precision_value != "") {
precision.set_value (precision_value);
} else {
#if ANDROID
precision.set_value_round (0.5);
#else
precision.set_value_round (1);
#endif
}
precision.new_value_action.connect ((self) => {
MainWindow.get_toolbox ().select_tool (precision);
Preferences.set ("precision", self.get_display_value ());
MainWindow.get_toolbox ().redraw ((int) precision.x, (int) precision.y, 70, 70);
});
precision.select_action.connect((self) => {
DrawingTools.pen_tool.set_precision (((SpinButton)self).get_value ());
});
precision.set_min (0.001);
precision.set_max (1);
tools.add (new SettingsItem (precision, t_("Precision for pen tool")));
Tool show_all_line_handles = new Tool ("show_all_line_handles");
show_all_line_handles.select_action.connect((self) => {
Path.show_all_line_handles = !Path.show_all_line_handles;
Glyph g = MainWindow.get_current_glyph ();
g.redraw_area (0, 0, g.allocation.width, g.allocation.height);
});
tools.add (new SettingsItem (show_all_line_handles, t_("Show or hide control point handles")));
Tool fill_open_path = new Tool ("fill_open_path");
fill_open_path.select_action.connect((self) => {
Path.fill_open_path = true;
});
fill_open_path.deselect_action.connect((self) => {
Path.fill_open_path = false;
});
tools.add (new SettingsItem (fill_open_path, t_("Fill paths.")));
Tool ttf_units = new Tool ("ttf_units");
ttf_units.select_action.connect((self) => {
GridTool.ttf_units = !GridTool.ttf_units;
Preferences.set ("ttf_units", @"$(GridTool.ttf_units)");
});
tools.add (new SettingsItem (ttf_units, t_("Use TTF units.")));
SpinButton freehand_samples = new SpinButton ("freehand_samples_per_point");
tools.add (new SettingsItem (freehand_samples, t_("Number of points added by the freehand tool")));
freehand_samples.set_max (9);
freehand_samples.set_min (0.002);
if (BirdFont.android) {
freehand_samples.set_value_round (2.5);
} else {
freehand_samples.set_value_round (1);
}
if (Preferences.get ("freehand_samples") != "") {
freehand_samples.set_value (Preferences.get ("freehand_samples"));
DrawingTools.track_tool.set_samples_per_point (freehand_samples.get_value ());
}
freehand_samples.new_value_action.connect ((self) => {
DrawingTools.track_tool.set_samples_per_point (freehand_samples.get_value ());
});
SpinButton simplification_threshold = new SpinButton ("simplification_threshold");
simplification_threshold.set_value_round (0.5);
tools.add (new SettingsItem (simplification_threshold, t_("Path simplification threshold")));
simplification_threshold.set_max (5);
freehand_samples.set_min (0.002);
if (Preferences.get ("simplification_threshold") != "") {
freehand_samples.set_value (Preferences.get ("simplification_threshold"));
DrawingTools.pen_tool.set_simplification_threshold (simplification_threshold.get_value ());
}
freehand_samples.new_value_action.connect ((self) => {
DrawingTools.pen_tool.set_simplification_threshold (simplification_threshold.get_value ());
});
Tool translate_ui = new Tool ("translate");
translate_ui.select_action.connect((self) => {
Preferences.set ("translate", @"true");
ThemeTab.redraw_ui ();
translate_ui.selected = true;
MainWindow.show_dialog (new MessageDialog (restart_message));
});
translate_ui.deselect_action.connect((self) => {
Preferences.set ("translate", @"false");
translate_ui.selected = false;
MainWindow.show_dialog (new MessageDialog (restart_message));
ThemeTab.redraw_ui ();
});
string translate_setting = Preferences.get ("translate");
translate_ui.selected = translate_setting == "" || translate_setting == "true";
tools.add (new SettingsItem (translate_ui, t_("Translate")));
Tool themes = new Tool ("open_theme_tab");
themes.set_icon ("theme");
themes.select_action.connect((self) => {
MenuTab.show_theme_tab ();
});
tools.add (new SettingsItem (themes, t_("Color theme")));
SpinButton num_backups = new SpinButton ("num_backups");
tools.add (new SettingsItem (num_backups, t_("Number of backups per font")));
num_backups.set_integers (true);
num_backups.set_max (100);
num_backups.set_min (0);
num_backups.set_value ("20");
string current_num_backups = Preferences.get ("num_backups");
if (current_num_backups != "") {
num_backups.set_value (current_num_backups);
}
num_backups.new_value_action.connect ((self) => {
Preferences.set ("num_backups", num_backups.get_short_display_value ());
});
Tool load_backups = new Tool ("load_backups");
load_backups.select_action.connect((self) => {
BackupTab backups = new BackupTab ();
MainWindow.tabs.add_unique_tab (backups);
MainWindow.tabs.select_tab_name ("Backups");
load_backups.selected = false;
});
tools.add (new SettingsItem (load_backups, t_("Load a backup font")));
tools.add (new SettingsItem.head_line (t_("Key Bindings")));
foreach (MenuItem menu_item in MainWindow.get_menu ().sorted_menu_items) {
tools.add (new SettingsItem.key_binding (menu_item));
}
}
public override string get_label () {
return t_("Settings");
}
public override string get_name () {
return "Settings";
}
}
}
# vue3 renderer
#!/usr/bin/env perl
#
# namespace.pl. Mon Aug 30 2004
#
# Perform a name space analysis on the linux kernel.
#
# Copyright Keith Owens <kaos@ocs.com.au>. GPL.
#
# Invoke by changing directory to the top of the kernel object
# tree then namespace.pl, no parameters.
#
# Tuned for 2.1.x kernels with the new module handling, it will
# work with 2.0 kernels as well.
#
# Last change 2.6.9-rc1, adding support for separate source and object
# trees.
#
# The source must be compiled/assembled first, the object files
# are the primary input to this script. Incomplete or missing
# objects will result in a flawed analysis. Compile both vmlinux
# and modules.
#
# Even with complete objects, treat the result of the analysis
# with caution. Some external references are only used by
# certain architectures, others with certain combinations of
# configuration parameters. Ideally the source should include
# something like
#
# #ifndef CONFIG_...
# static
# #endif
# symbol_definition;
#
# so the symbols are defined as static unless a particular
# CONFIG_... requires it to be external.
#
# A symbol that is suffixed with '(export only)' has these properties
#
# * It is global.
# * It is marked EXPORT_SYMBOL or EXPORT_SYMBOL_GPL, either in the same
# source file or a different source file.
# * Given the current .config, nothing uses the symbol.
#
# The symbol is a candidate for conversion to static, plus removal of the
# export. But be careful that a different .config might use the symbol.
#
#
# Name space analysis and cleanup is an iterative process. You cannot
# expect to find all the problems in a single pass.
#
# * Identify possibly unnecessary global declarations, verify that they
# really are unnecessary and change them to static.
# * Compile and fix up gcc warnings about static, removing dead symbols
# as necessary.
# * make clean and rebuild with different configs (especially
# CONFIG_MODULES=n) to see which symbols are being defined when the
# config does not require them. These symbols bloat the kernel object
# for no good reason, which is frustrating for embedded systems.
# * Wrap config sensitive symbols in #ifdef CONFIG_foo, as long as the
# code does not get too ugly.
# * Repeat the name space analysis until you can live with with the
# result.
#
use warnings;
use strict;
use File::Find;
my $nm = ($ENV{'NM'} || "nm") . " -p";
my $objdump = ($ENV{'OBJDUMP'} || "objdump") . " -s -j .comment";
my $srctree = "";
my $objtree = "";
$srctree = "$ENV{'srctree'}/" if (exists($ENV{'srctree'}));
$objtree = "$ENV{'objtree'}/" if (exists($ENV{'objtree'}));
if ($#ARGV != -1) {
print STDERR "usage: $0 takes no parameters\n";
die("giving up\n");
}
my %nmdata = (); # nm data for each object
my %def = (); # all definitions for each name
my %ksymtab = (); # names that appear in __ksymtab_
my %ref = (); # $ref{$name} exists if there is a true external reference to $name
my %export = (); # $export{$name} exists if there is an EXPORT_... of $name
my %nmexception = (
'fs/ext3/bitmap' => 1,
'fs/ext4/bitmap' => 1,
'arch/x86/lib/thunk_32' => 1,
'arch/x86/lib/cmpxchg' => 1,
'arch/x86/vdso/vdso32/note' => 1,
'lib/irq_regs' => 1,
'usr/initramfs_data' => 1,
'drivers/scsi/aic94xx/aic94xx_dump' => 1,
'drivers/scsi/libsas/sas_dump' => 1,
'lib/dec_and_lock' => 1,
'drivers/ide/ide-probe-mini' => 1,
'usr/initramfs_data' => 1,
'drivers/acpi/acpia/exdump' => 1,
'drivers/acpi/acpia/rsdump' => 1,
'drivers/acpi/acpia/nsdumpdv' => 1,
'drivers/acpi/acpia/nsdump' => 1,
'arch/ia64/sn/kernel/sn2/io' => 1,
'arch/ia64/kernel/gate-data' => 1,
'security/capability' => 1,
'fs/ntfs/sysctl' => 1,
'fs/jfs/jfs_debug' => 1,
);
my %nameexception = (
'mod_use_count_' => 1,
'__initramfs_end' => 1,
'__initramfs_start' => 1,
'_einittext' => 1,
'_sinittext' => 1,
'kallsyms_names' => 1,
'kallsyms_num_syms' => 1,
'kallsyms_addresses'=> 1,
'kallsyms_offsets' => 1,
'kallsyms_relative_base'=> 1,
'__this_module' => 1,
'_etext' => 1,
'_edata' => 1,
'_end' => 1,
'__bss_start' => 1,
'_text' => 1,
'_stext' => 1,
'__gp' => 1,
'ia64_unw_start' => 1,
'ia64_unw_end' => 1,
'__init_begin' => 1,
'__init_end' => 1,
'__bss_stop' => 1,
'__nosave_begin' => 1,
'__nosave_end' => 1,
'pg0' => 1,
'vdso_enabled' => 1,
'__stack_chk_fail' => 1,
'VDSO32_PRELINK' => 1,
'VDSO32_vsyscall' => 1,
'VDSO32_rt_sigreturn'=>1,
'VDSO32_sigreturn' => 1,
);
&find(\&linux_objects, '.'); # find the objects and do_nm on them
&list_multiply_defined();
&resolve_external_references();
&list_extra_externals();
exit(0);
sub linux_objects
{
# Select objects, ignoring objects which are only created by
# merging other objects. Also ignore all of modules, scripts
# and compressed. Most conglomerate objects are handled by do_nm,
# this list only contains the special cases. These include objects
# that are linked from just one other object and objects for which
# there is really no permanent source file.
my $basename = $_;
$_ = $File::Find::name;
s:^\./::;
if (/.*\.o$/ &&
! (
m:/built-in.o$:
|| m:arch/x86/vdso/:
|| m:arch/x86/boot/:
|| m:arch/ia64/ia32/ia32.o$:
|| m:arch/ia64/kernel/gate-syms.o$:
|| m:arch/ia64/lib/__divdi3.o$:
|| m:arch/ia64/lib/__divsi3.o$:
|| m:arch/ia64/lib/__moddi3.o$:
|| m:arch/ia64/lib/__modsi3.o$:
|| m:arch/ia64/lib/__udivdi3.o$:
|| m:arch/ia64/lib/__udivsi3.o$:
|| m:arch/ia64/lib/__umoddi3.o$:
|| m:arch/ia64/lib/__umodsi3.o$:
|| m:arch/ia64/scripts/check_gas_for_hint.o$:
|| m:arch/ia64/sn/kernel/xp.o$:
|| m:boot/bbootsect.o$:
|| m:boot/bsetup.o$:
|| m:/bootsect.o$:
|| m:/boot/setup.o$:
|| m:/compressed/:
|| m:drivers/cdrom/driver.o$:
|| m:drivers/char/drm/tdfx_drv.o$:
|| m:drivers/ide/ide-detect.o$:
|| m:drivers/ide/pci/idedriver-pci.o$:
|| m:drivers/media/media.o$:
|| m:drivers/scsi/sd_mod.o$:
|| m:drivers/video/video.o$:
|| m:fs/devpts/devpts.o$:
|| m:fs/exportfs/exportfs.o$:
|| m:fs/hugetlbfs/hugetlbfs.o$:
|| m:fs/msdos/msdos.o$:
|| m:fs/nls/nls.o$:
|| m:fs/ramfs/ramfs.o$:
|| m:fs/romfs/romfs.o$:
|| m:fs/vfat/vfat.o$:
|| m:init/mounts.o$:
|| m:^modules/:
|| m:net/netlink/netlink.o$:
|| m:net/sched/sched.o$:
|| m:/piggy.o$:
|| m:^scripts/:
|| m:sound/.*/snd-:
|| m:^.*/\.tmp_:
|| m:^\.tmp_:
|| m:/vmlinux-obj.o$:
|| m:^tools/:
)
) {
do_nm($basename, $_);
}
$_ = $basename; # File::Find expects $_ untouched (undocumented)
}
sub do_nm
{
my ($basename, $fullname) = @_;
my ($source, $type, $name);
if (! -e $basename) {
printf STDERR "$basename does not exist\n";
return;
}
if ($fullname !~ /\.o$/) {
printf STDERR "$fullname is not an object file\n";
return;
}
($source = $basename) =~ s/\.o$//;
if (-e "$source.c" || -e "$source.S") {
$source = "$objtree$File::Find::dir/$source";
} else {
$source = "$srctree$File::Find::dir/$source";
}
if (! -e "$source.c" && ! -e "$source.S") {
# No obvious source, exclude the object if it is conglomerate
open(my $objdumpdata, "$objdump $basename|")
or die "$objdump $fullname failed $!\n";
my $comment;
while (<$objdumpdata>) {
chomp();
if (/^In archive/) {
# Archives are always conglomerate
$comment = "GCC:GCC:";
last;
}
next if (! /^[ 0-9a-f]{5,} /);
$comment .= substr($_, 43);
}
close($objdumpdata);
if (!defined($comment) || $comment !~ /GCC\:.*GCC\:/m) {
printf STDERR "No source file found for $fullname\n";
}
return;
}
open (my $nmdata, "$nm $basename|")
or die "$nm $fullname failed $!\n";
my @nmdata;
while (<$nmdata>) {
chop;
($type, $name) = (split(/ +/, $_, 3))[1..2];
# Expected types
# A absolute symbol
# B weak external reference to data that has been resolved
# C global variable, uninitialised
# D global variable, initialised
# G global variable, initialised, small data section
# R global array, initialised
# S global variable, uninitialised, small bss
# T global label/procedure
# U external reference
# W weak external reference to text that has been resolved
# V similar to W, but the value of the weak symbol becomes zero with no error.
# a assembler equate
# b static variable, uninitialised
# d static variable, initialised
# g static variable, initialised, small data section
# r static array, initialised
# s static variable, uninitialised, small bss
# t static label/procedures
# w weak external reference to text that has not been resolved
# v similar to w
# ? undefined type, used a lot by modules
if ($type !~ /^[ABCDGRSTUWVabdgrstwv?]$/) {
printf STDERR "nm output for $fullname contains unknown type '$_'\n";
}
elsif ($name =~ /\./) {
# name with '.' is local static
}
else {
$type = 'R' if ($type eq '?'); # binutils replaced ? with R at one point
# binutils keeps changing the type for exported symbols, force it to R
$type = 'R' if ($name =~ /^__ksymtab/ || $name =~ /^__kstrtab/);
$name =~ s/_R[a-f0-9]{8}$//; # module versions adds this
if ($type =~ /[ABCDGRSTWV]/ &&
$name ne 'init_module' &&
$name ne 'cleanup_module' &&
$name ne 'Using_Versions' &&
$name !~ /^Version_[0-9]+$/ &&
$name !~ /^__parm_/ &&
$name !~ /^__kstrtab/ &&
$name !~ /^__ksymtab/ &&
$name !~ /^__kcrctab_/ &&
$name !~ /^__exitcall_/ &&
$name !~ /^__initcall_/ &&
$name !~ /^__kdb_initcall_/ &&
$name !~ /^__kdb_exitcall_/ &&
$name !~ /^__module_/ &&
$name !~ /^__mod_/ &&
$name !~ /^__crc_/ &&
$name ne '__this_module' &&
$name ne 'kernel_version') {
if (!exists($def{$name})) {
$def{$name} = [];
}
push(@{$def{$name}}, $fullname);
}
push(@nmdata, "$type $name");
if ($name =~ /^__ksymtab_/) {
$name = substr($name, 10);
if (!exists($ksymtab{$name})) {
$ksymtab{$name} = [];
}
push(@{$ksymtab{$name}}, $fullname);
}
}
}
close($nmdata);
if ($#nmdata < 0) {
printf "No nm data for $fullname\n"
unless $nmexception{$fullname};
return;
}
$nmdata{$fullname} = \@nmdata;
}
sub drop_def
{
my ($object, $name) = @_;
my $nmdata = $nmdata{$object};
my ($i, $j);
for ($i = 0; $i <= $#{$nmdata}; ++$i) {
if ($name eq (split(' ', $nmdata->[$i], 2))[1]) {
splice(@{$nmdata{$object}}, $i, 1);
my $def = $def{$name};
for ($j = 0; $j < $#{$def{$name}}; ++$j) {
if ($def{$name}[$j] eq $object) {
splice(@{$def{$name}}, $j, 1);
}
}
last;
}
}
}
sub list_multiply_defined
{
foreach my $name (keys(%def)) {
if ($#{$def{$name}} > 0) {
# Special case for cond_syscall
if ($#{$def{$name}} == 1 &&
($name =~ /^sys_/ || $name =~ /^compat_sys_/ ||
$name =~ /^sys32_/)) {
if($def{$name}[0] eq "kernel/sys_ni.o" ||
$def{$name}[1] eq "kernel/sys_ni.o") {
&drop_def("kernel/sys_ni.o", $name);
next;
}
}
printf "$name is multiply defined in :-\n";
foreach my $module (@{$def{$name}}) {
printf "\t$module\n";
}
}
}
}
sub resolve_external_references
{
my ($kstrtab, $ksymtab, $export);
printf "\n";
foreach my $object (keys(%nmdata)) {
my $nmdata = $nmdata{$object};
for (my $i = 0; $i <= $#{$nmdata}; ++$i) {
my ($type, $name) = split(' ', $nmdata->[$i], 2);
if ($type eq "U" || $type eq "w") {
if (exists($def{$name}) || exists($ksymtab{$name})) {
# add the owning object to the nmdata
$nmdata->[$i] = "$type $name $object";
# only count as a reference if it is not EXPORT_...
$kstrtab = "R __kstrtab_$name";
$ksymtab = "R __ksymtab_$name";
$export = 0;
for (my $j = 0; $j <= $#{$nmdata}; ++$j) {
if ($nmdata->[$j] eq $kstrtab ||
$nmdata->[$j] eq $ksymtab) {
$export = 1;
last;
}
}
if ($export) {
$export{$name} = "";
}
else {
$ref{$name} = ""
}
}
elsif ( ! $nameexception{$name}
&& $name !~ /^__sched_text_/
&& $name !~ /^__start_/
&& $name !~ /^__end_/
&& $name !~ /^__stop_/
&& $name !~ /^__scheduling_functions_.*_here/
&& $name !~ /^__.*initcall_/
&& $name !~ /^__.*per_cpu_start/
&& $name !~ /^__.*per_cpu_end/
&& $name !~ /^__alt_instructions/
&& $name !~ /^__setup_/
&& $name !~ /^__mod_timer/
&& $name !~ /^__mod_page_state/
&& $name !~ /^init_module/
&& $name !~ /^cleanup_module/
) {
printf "Cannot resolve ";
printf "weak " if ($type eq "w");
printf "reference to $name from $object\n";
}
}
}
}
}
sub list_extra_externals
{
my %noref = ();
foreach my $name (keys(%def)) {
if (! exists($ref{$name})) {
my @module = @{$def{$name}};
foreach my $module (@module) {
if (! exists($noref{$module})) {
$noref{$module} = [];
}
push(@{$noref{$module}}, $name);
}
}
}
if (%noref) {
printf "\nExternally defined symbols with no external references\n";
foreach my $module (sort(keys(%noref))) {
printf " $module\n";
foreach (sort(@{$noref{$module}})) {
my $export;
if (exists($export{$_})) {
$export = " (export only)";
} else {
$export = "";
}
printf " $_$export\n";
}
}
}
}
/*
* obackup.c
*
* $Id$
*
* Online & Incremental Backup
*
* This file is part of the OpenLink Software Virtuoso Open-Source (VOS)
* project.
*
* Copyright (C) 1998-2020 OpenLink Software
*
* This project is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; only version 2 of the License, dated June 1991.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
*/
#include "sqlnode.h"
#include "sqlbif.h"
#include "libutil.h"
#ifdef WIN32
# include "wiservic.h"
#endif
#include "zlib.h"
#include "recovery.h"
#include "security.h"
#ifdef WIN32
#include <windows.h>
#define HAVE_DIRECT_H
#endif
#ifdef HAVE_DIRECT_H
#include <direct.h>
#include <io.h>
#define mkdir(p,m) _mkdir (p)
#define FS_DIR_MODE 0
#define PATH_MAX MAX_PATH
#define get_cwd(p,l) _get_cwd (p,l)
#else
#include <dirent.h>
#define FS_DIR_MODE (S_IRWXU | S_IRWXG)
#endif
#undef DBG_BREAKPOINTS
#undef INC_DEBUG
//#define OBACKUP_TRACE
typedef struct ob_err_ctx_s
{
int oc_inx;
char oc_file[FILEN_BUFSIZ];
} ob_err_ctx_t;
ol_backup_ctx_t bp_ctx = {
{
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0,
0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0
}, /* prefix */
0, /* ts */
0, /* num */
0, /* pages */
0, /* date in sec */
0, /* index of directory */
0, /* written bytes */
};
typedef int (*file_check_f) (caddr_t file, caddr_t ctx, caddr_t dir);
const char* recover_file_prefix = 0;
static dp_addr_t dir_first_page = 0;
static time_t db_bp_date = 0;
static long ol_max_dir_sz = 0;
static dk_hash_t * ol_known_pages = 0;
static int read_backup_header (ol_backup_context_t* ctx, char ** header);
static void backup_path_init ();
static int ob_check_file (caddr_t elt, caddr_t ctx, caddr_t dir);
static int ob_foreach_dir (caddr_t * dirs, caddr_t ctx, ob_err_ctx_t* e_ctx, file_check_f func);
static int ob_get_num_from_file (caddr_t file, caddr_t prefix);
static int try_to_change_dir (ol_backup_context_t * ctx);
static void backup_context_flush (ol_backup_context_t * ctx);
void cpt_over (void);
typedef struct backup_status_s
{
int is_running;
int is_error;
long pages;
long processed_pages;
char errcode[101];
char errstring[1025];
} backup_status_t;
static backup_status_t backup_status;
caddr_t * backup_patha = 0;
static void ol_test_jmp (dk_session_t * ses)
{
SESSTAT_CLR (ses->dks_session, SST_OK);
SESSTAT_SET (ses->dks_session, SST_BROKEN_CONNECTION);
longjmp_splice (&SESSION_SCH_DATA (ses)->sio_write_broken_context, 1);
}
char* format_timestamp (uint32 * ts)
{
static char buf [200];
unsigned int c1, c2, c3, c4;
c1 = (ts[0] & 0xFF000000) >> 24;
c2 = (ts[0] & 0x00FF0000) >> 16;
c3 = (ts[0] & 0x0000FF00) >> 8;
c4 = (ts[0] & 0x000000FF) >> 0;
snprintf (buf, sizeof (buf), "0x%02X%02X-0x%02X-0x%02X", c1, c2, c3, c4);
return box_dv_short_string (buf);
}
#if 0
char* bp_curr_timestamp ()
{
char * ts;
IN_CPT_1;
ts = format_timestamp (&bp_ctx.db_bp_ts);
LEAVE_CPT_1;
return ts;
}
char* bp_curr_date ()
{
if (bp_ctx.db_bp_date)
{
time_t tmp = bp_ctx.db_bp_date;
char * static_str = ctime (&tmp);
return box_dv_short_string (static_str ? static_str : "invalid");
}
else
return box_dv_short_string ("unknown");
}
char* bp_curr_prefix ()
{
char* prefix = 0;
IN_CPT_1;
if (bp_ctx.db_bp_prfx[0])
prefix = box_dv_short_string (bp_ctx.db_bp_prfx);
else
prefix = NEW_DB_NULL;
LEAVE_CPT_1;
return prefix;
}
caddr_t bp_curr_num ()
{
uint32 num;
IN_CPT_1;
num = bp_ctx.db_bp_num;
LEAVE_CPT_1;
return box_num (num);
}
caddr_t bp_curr_inx ()
{
uint32 num;
IN_CPT_1;
num = bp_ctx.db_bp_index;
LEAVE_CPT_1;
return box_num (num);
}
#else
char* bp_curr_timestamp ()
{
char * ts;
ts = format_timestamp (&bp_ctx.db_bp_ts);
return ts;
}
char* bp_curr_date ()
{
if (bp_ctx.db_bp_date)
{
time_t tmp = bp_ctx.db_bp_date;
char * static_str = ctime (&tmp);
return box_dv_short_string (static_str ? static_str : "invalid");
}
else
return box_dv_short_string ("unknown");
}
char* bp_curr_prefix ()
{
char* prefix = 0;
if (bp_ctx.db_bp_prfx[0])
prefix = box_dv_short_string (bp_ctx.db_bp_prfx);
else
prefix = NEW_DB_NULL;
return prefix;
}
caddr_t bp_curr_num ()
{
uint32 num;
num = bp_ctx.db_bp_num;
return box_num (num);
}
caddr_t bp_curr_inx ()
{
uint32 num;
num = bp_ctx.db_bp_index;
return box_num (num);
}
#endif
static
void make_log_error (ol_backup_context_t* ctx, const char* code, const char* msg, ...);
static
buffer_desc_t * incset_make_copy (buffer_desc_t * incset_orig_buf);
static void incset_rollback (ol_backup_context_t* ctx);
static void ctx_clear_backup_files (ol_backup_context_t* ctx);
/* online/incremental backup functions */
int ol_backup_page (it_cursor_t * itc, buffer_desc_t * buf, ol_backup_context_t * ctx);
caddr_t compressed_buffer (buffer_desc_t* buf);
int uncompress_buffer (caddr_t compr, unsigned char* page_buf);
dk_hash_t *
hash_reverse (dk_hash_t* hash)
{
dk_hash_t* new_hash = hash_table_allocate (hash->ht_actual_size);
dk_hash_iterator_t iter;
dp_addr_t origin_dp;
dp_addr_t remap_dp;
uptrlong origin_dp_ptr, remap_dp_ptr;
for (dk_hash_iterator (&iter, hash);
dk_hit_next (&iter, (void**)&origin_dp_ptr, (void**)&remap_dp_ptr);
/* */)
{
origin_dp = (dp_addr_t) origin_dp_ptr;
remap_dp = (dp_addr_t) remap_dp_ptr;
sethash (DP_ADDR2VOID(remap_dp), new_hash, DP_ADDR2VOID(origin_dp));
}
return new_hash;
}
void
ol_write_header (ol_backup_context_t * ctx)
{
if (ctx->octx_is_tail)
return;
/* prefix */
print_long ((long) strlen (ctx->octx_file_prefix), ctx->octx_file);
session_buffered_write (ctx->octx_file, ctx->octx_file_prefix, strlen (ctx->octx_file_prefix));
/* timestamp */
print_long (ctx->octx_timestamp, ctx->octx_file);
/* number of this file */
print_long (ctx->octx_num, ctx->octx_file);
/* size of all backup */
print_long (ctx->octx_last_page, ctx->octx_file);
}
int ol_buf_disk_read (buffer_desc_t* buf)
{
dbe_storage_t* dbs = buf->bd_storage;
OFF_T off;
OFF_T rc;
if (!IS_IO_ALIGN (buf->bd_buffer))
GPF_T1 ("ol_buf_disk_read (): The buffer is not io-aligned");
if (dbs->dbs_disks)
{
ext_ref_t er;
disk_stripe_t *dst = dp_disk_locate (dbs, buf->bd_physical_page, &off, 0, &er);
int fd = dst_fd (dst);
rc = LSEEK (fd, off, SEEK_SET);
if (rc != off)
{
log_error ("Seek failure on stripe %s", dst->dst_file);
GPF_T;
}
rc = read (fd, buf->bd_buffer, PAGE_SZ);
dst_fd_done (dst, fd, &er);
if (rc != PAGE_SZ)
{
log_error ("Read failure on stripe %s", dst->dst_file);
GPF_T;
}
}
else
{
mutex_enter (dbs->dbs_file_mtx);
off = ((OFF_T)buf->bd_physical_page) * PAGE_SZ;
rc = LSEEK (dbs->dbs_fd, off, SEEK_SET);
if (rc != off)
{
log_error ("Seek failure on database %s", dbs->dbs_file);
GPF_T;
}
rc = read (dbs->dbs_fd, (char *) buf->bd_buffer, PAGE_SZ);
if (rc != PAGE_SZ)
{
log_error ("Read failure on database %s", dbs->dbs_file);
GPF_T;
}
mutex_leave (dbs->dbs_file_mtx);
}
return WI_OK;
}
int
ol_write_cfg_page (ol_backup_context_t * ctx)
{
wi_database_t db;
buffer_desc_t * buf = buffer_allocate (DPF_CP_REMAP);
int res = -1;
buf->bd_page = buf->bd_physical_page = 0;
buf->bd_storage = wi_inst.wi_master;
if (WI_ERROR != ol_buf_disk_read (buf))
{
/* fix checkpoint page no */
memcpy (&db, buf->bd_buffer, sizeof (wi_database_t));
/* fix timestamp */
if (!bp_ctx.db_bp_ts)
GPF_T1 ("backup timestamp is not initialized");
strncpy (db.db_bp_prfx, bp_ctx.db_bp_prfx, BACKUP_PREFIX_SZ);
db.db_bp_ts = bp_ctx.db_bp_ts;
db.db_bp_pages = bp_ctx.db_bp_pages;
db.db_bp_num = bp_ctx.db_bp_num;
db.db_bp_date = bp_ctx.db_bp_date;
db.db_bp_index = bp_ctx.db_bp_index;
db.db_bp_wr_bytes = bp_ctx.db_bp_wr_bytes;
memcpy (buf->bd_buffer, &db, sizeof (wi_database_t));
ctx->octx_disable_increment = 1;
res = ol_backup_page (NULL, buf, ctx);
ctx->octx_disable_increment = 0;
}
buffer_free (buf);
return res;
}
FILE * obackup_trace;
void
ol_remap_trace (ol_backup_context_t * ctx)
{
buffer_desc_t * buf = ctx->octx_cpt_set;
if (!obackup_trace)
return;
fprintf (obackup_trace, "Remaps follow:\n");
while (buf)
{
int inx;
for (inx = DP_DATA; inx < PAGE_SZ; inx += 8)
{
dp_addr_t l = LONG_REF (buf->bd_buffer + inx);
dp_addr_t p = LONG_REF (buf->bd_buffer + inx + 4);
if (l)
fprintf (obackup_trace, "L=%ld P=%ld\n", (long)l, (long)p);
}
buf = buf->bd_next;
}
fflush (obackup_trace);
}
int
ol_write_page_set (ol_backup_context_t * ctx, buffer_desc_t * buf, int clr)
{
while (buf)
{
if (clr)
{
memset (buf->bd_buffer + DP_DATA, 0, PAGE_DATA_SZ);
page_set_checksum_init (buf->bd_buffer + DP_DATA);
}
if (-1 == ol_backup_page (NULL, buf, ctx))
return -1;
buf = buf->bd_next;
}
return 0;
}
int
ol_write_sets (ol_backup_context_t * ctx, dbe_storage_t * storage)
{
int res, inx;
res = ol_write_page_set (ctx, ctx->octx_dbs->dbs_incbackup_set, 1);
res = ol_write_page_set (ctx, ctx->octx_cpt_set, 0);
ol_remap_trace (ctx);
res = ol_write_page_set (ctx, ctx->octx_ext_set, 0);
res = ol_write_page_set (ctx, ctx->octx_free_set, 0);
DO_BOX (caddr_t *, elt, inx, ctx->octx_registry)
{
caddr_t name = elt[0];
caddr_t val = elt[1];
buffer_desc_t * em;
if (!DV_STRINGP (name) || !DV_STRINGP (val))
continue;
if (0 == strncmp (name, "__EM:", 5)
|| 0 == strncmp (name, "__EMC:", 6)
|| 0 == strcmp (name, "__sys_ext_map"))
{
dp_addr_t dp = atoi (val);
em = dbs_read_page_set (ctx->octx_dbs, dp, DPF_EXTENT_MAP);
res = ol_write_page_set (ctx, em, 0);
buffer_set_free (em);
}
}
END_DO_BOX;
return res;
}
int
ol_regist_unmark (it_cursor_t * itc, buffer_desc_t * buf, ol_backup_context_t * ctx)
{
uint32* array;
int inx, bit;
dp_addr_t array_page;
IN_DBS (buf->bd_storage);
dbs_locate_incbackup_bit (buf->bd_storage, buf->bd_page,
&array, &array_page, &inx, &bit);
if (array[inx] & 1<<bit)
{
page_set_update_checksum (array, inx, bit);
array[inx] &= ~(1 << bit);
}
LEAVE_DBS (buf->bd_storage);
return 0;
}
int
ol_write_registry (dbe_storage_t * dbs, ol_backup_context_t * ctx, ol_regist_callback_f callback)
{
dp_addr_t first = dbs->dbs_registry;
buffer_desc_t * buf = buffer_allocate (DPF_BLOB);
buf->bd_storage = dbs;
while (first)
{
buf->bd_physical_page = buf->bd_page = first;
if (WI_ERROR == ol_buf_disk_read (buf))
GPF_T1 ("Could not read registry during backup");
if (-1 == (*callback)(0, buf, ctx))
return -1;
first = LONG_REF (buf->bd_buffer + DP_OVERFLOW);
}
buffer_free (buf);
return 0;
}
long ch_c;
long cm_c;
int
ol_backup_page (it_cursor_t * itc, buffer_desc_t * buf, ol_backup_context_t * ctx)
{
ol_backup_context_t * octx = (ol_backup_context_t*)ctx;
dp_addr_t page = buf->bd_physical_page; /* unlike v5, all restores to same phys place, incl. remapped pages */
int backuped = 0;
int write_header_first = 0;
if (octx->octx_is_invalid)
return -1;
again:
if (DP_DELETED != page)
{
caddr_t compr_buf;
compr_buf = compressed_buffer (buf);
if (compr_buf)
{
OFF_T prev_length = ctx->octx_file->dks_bytes_sent;
if (ctx->octx_file->dks_out_fill)
GPF_T1 ("file is not flushed");
CATCH_WRITE_FAIL (ctx->octx_file)
{
if (write_header_first)
ol_write_header (octx);
print_long (page, octx->octx_file);
/* actually needed for testing purposes only */
if (!octx->octx_disable_increment &&
octx->octx_max_wr_bytes &&
((octx->octx_wr_bytes + octx->octx_file->dks_bytes_sent + octx->octx_file->dks_out_fill -1)
> octx->octx_max_wr_bytes))
{
backup_context_flush (octx);
log_warning ("maximum size of directory reached, [" OFF_T_PRINTF_FMT "]",
(OFF_T_PRINTF_DTP) (octx->octx_wr_bytes +
octx->octx_file->dks_bytes_sent +
octx->octx_file->dks_out_fill - 1));
ol_test_jmp(octx->octx_file);
}
print_object (compr_buf, octx->octx_file, 0,0);
dk_free_box (compr_buf);
backuped = page;
ch_c++;
backup_status.processed_pages = ++octx->octx_page_count;
dp_set_backup_flag (wi_inst.wi_master, buf->bd_page, 0);
if (buf->bd_physical_page && buf->bd_physical_page != buf->bd_page)
dp_set_backup_flag (wi_inst.wi_master, buf->bd_physical_page, 0);
backup_context_flush (octx);
if (!octx->octx_disable_increment && (0 == octx->octx_page_count % octx->octx_max_pages))
{
if (backup_context_increment (octx,0) < 0)
return -1;
ol_write_header (octx);
backup_context_flush(octx);
return backuped;
}
}
FAILED
{
FTRUNCATE (tcpses_get_fd (octx->octx_file->dks_session), prev_length);
if (try_to_change_dir (octx))
{
write_header_first = 1;
goto again;
}
octx->octx_is_invalid = 1;
return -1;
}
END_WRITE_FAIL (octx->octx_file);
}
else
{
make_log_error ((ol_backup_context_t*) ctx, COMPRESS_ERR_CODE, COMPRESS_ERR_STR, page);
octx->octx_is_invalid = 1;
return -1;
}
}
return backuped;
}
void
ol_save_context (ol_backup_context_t * ctx)
{
session_flush_1 (ctx->octx_file);
}
static int
is_in_backup_set (ol_backup_context_t * octx, dp_addr_t page)
{
uint32* array;
int inx, bit;
dp_addr_t array_page;
int32 x;
if (octx->octx_is_invalid)
return 0;
IN_DBS (octx->octx_dbs);
dbs_locate_page_bit (octx->octx_dbs, &octx->octx_dbs->dbs_incbackup_set,
page, &array, &array_page, &inx, &bit, V_EXT_OFFSET_INCB_SET, 1);
x = (array[inx] & (1 << bit));
LEAVE_DBS (octx->octx_dbs);
if (x)
return 1;
return 0;
}
dp_addr_t
db_backup_pages (ol_backup_context_t * backup_ctx, dp_addr_t start_dp, dp_addr_t end_dp)
{
ALIGNED_PAGE_BUFFER (bd_buffer);
buffer_desc_t stack_buf;
buffer_desc_t *buf = &stack_buf;
dp_addr_t end_page;
dp_addr_t page_no;
dbe_storage_t * storage = wi_inst.wi_master;
stack_buf.bd_buffer = bd_buffer;
if (!start_dp)
start_dp = 1;
end_page = backup_ctx->octx_last_page;
log_info("Starting online backup from page %ld to %ld, current log is: %s", start_dp, end_page, storage->dbs_log_name);
for (page_no = start_dp; page_no < end_page; page_no++)
{
dp_addr_t log_page = 0;
if (0 == page_no%10000)
log_info("Backing up page %ld", page_no);
if (page_no == end_page - 1)
goto backup; /* must always write this to make sure the restored database is at least as long as the original */
if (gethash (DP_ADDR2VOID(page_no), backup_ctx->octx_dbs->dbs_cpt_remap))
continue; /* there is a cpt remap page for this, so do not write this */
log_page = (uptrlong) gethash (DP_ADDR2VOID(page_no), backup_ctx->octx_cpt_remap_r);
if (!is_in_backup_set (backup_ctx, log_page ? log_page : page_no))
continue;
backup:
if (obackup_trace)
fprintf (obackup_trace, "W L=%ld P=%ld\n", (long)log_page, (long)page_no);
buf->bd_page = log_page ? log_page : page_no;
buf->bd_physical_page = page_no;
buf->bd_storage = storage;
if (WI_ERROR == ol_buf_disk_read (buf))
make_log_error (backup_ctx, READ_ERR_CODE, READ_ERR_STR, page_no);
else
{
ol_backup_page (NULL, buf, backup_ctx);
if (backup_ctx->octx_is_invalid)
return -1;
}
}
/* these sets are always written at the end of the backup */
if (-1 == ol_write_sets (backup_ctx, storage))
return -1;
if (-1 == ol_write_registry (backup_ctx->octx_dbs, backup_ctx, ol_backup_page))
return -1;
return 0;
}
void
backup_context_flush (ol_backup_context_t * ctx)
{
session_flush_1 (ctx->octx_file);
}
void
backup_context_free (ol_backup_context_t * ctx)
{
buffer_desc_t * incset = ctx->octx_incset;
if (ctx->octx_file)
{
fd_close (tcpses_get_fd (ctx->octx_file->dks_session),ctx->octx_curr_file);
PrpcSessionFree (ctx->octx_file);
}
dk_free_box (ctx->octx_error_code);
dk_free_box (ctx->octx_error_string);
buffer_set_free (incset);
dk_free_tree (list_to_array (ctx->octx_backup_files));
buffer_set_free (ctx->octx_free_set);
buffer_set_free (ctx->octx_ext_set);
buffer_set_free (ctx->octx_cpt_set);
dk_free_tree ((caddr_t) ctx->octx_registry);
if (ctx->octx_cpt_remap_r)
hash_table_free (ctx->octx_cpt_remap_r);
dk_free (ctx, sizeof (ol_backup_context_t));
}
int
backup_context_increment (ol_backup_context_t* ctx, int is_restore)
{
int fd;
/* needed for marking backup file RW under XP/2000 */
char curr_file[FILEN_BUFSIZ];
long new_num = ctx->octx_num + 1;
ctx->octx_is_tail = 0;
memcpy (curr_file, ctx->octx_curr_file, FILEN_BUFSIZ);
again:
snprintf (ctx->octx_curr_file, FILEN_BUFSIZ, "%s/%s%ld.bp", ctx->octx_backup_patha[ctx->octx_curr_dir], ctx->octx_file_prefix, new_num);
fd = fd_open (ctx->octx_curr_file,
is_restore ? OPEN_FLAGS_RO : (OPEN_FLAGS | O_TRUNC));
if (fd >= 0)
{
ctx->octx_num = new_num;
dk_set_push (&ctx->octx_backup_files, box_string (ctx->octx_curr_file));
if (ctx->octx_file)
{
session_flush_1 (ctx->octx_file);
ctx->octx_wr_bytes += ctx->octx_file->dks_bytes_sent;
dir_first_page = ctx->octx_curr_page;
fd_close (tcpses_get_fd (ctx->octx_file->dks_session), curr_file);
tcpses_set_fd (ctx->octx_file->dks_session, fd);
ctx->octx_file->dks_bytes_sent = 0;
}
else
{
ctx->octx_file = dk_session_allocate (SESCLASS_TCPIP);
tcpses_set_fd (ctx->octx_file->dks_session, fd);
}
}
else
{
if (is_restore && (++ctx->octx_curr_dir < BOX_ELEMENTS (ctx->octx_backup_patha)) )
goto again;
ctx->octx_is_invalid = 1;
return -1;
}
if (!is_restore)
ctx->octx_wr_bytes = 0;
return fd;
}
void
store_backup_context (ol_backup_context_t* ctx)
{
/* log_info ("clear hash"); */
clrhash (ctx->known);
strncpy ( bp_ctx.db_bp_prfx, ctx->octx_file_prefix, BACKUP_PREFIX_SZ);
bp_ctx.db_bp_ts = ctx->octx_timestamp;
bp_ctx.db_bp_num = ctx->octx_num;
bp_ctx.db_bp_pages = ctx->octx_page_count;
bp_ctx.db_bp_date = (dp_addr_t) db_bp_date;
bp_ctx.db_bp_index = ctx->octx_curr_dir;
bp_ctx.db_bp_wr_bytes = ctx->octx_wr_bytes;
}
int
try_to_restore_backup_context (ol_backup_context_t* ctx)
{
if (!bp_ctx.db_bp_ts)
return 0;
else
{
char * ts_str;
strncpy (ctx->octx_file_prefix, bp_ctx.db_bp_prfx, BACKUP_PREFIX_SZ);
ctx->octx_timestamp = bp_ctx.db_bp_ts;
ctx->octx_num = bp_ctx.db_bp_num;
/* ctx->octx_page_count = bp_ctx.db_bp_pages; */
ctx->octx_page_count = 0;
ctx->octx_curr_dir = bp_ctx.db_bp_index;
ctx->octx_wr_bytes = bp_ctx.db_bp_wr_bytes;
ts_str = format_timestamp (&ctx->octx_timestamp);
#ifdef DEBUG
log_info ("Found backup info - prefix[%s], ts[%s], num[%ld], diridx[%ld]",
ctx->octx_file_prefix, ts_str, ctx->octx_num, ctx->octx_curr_dir);
#endif
dk_free_box (ts_str);
return 1;
}
}
ol_backup_context_t*
backup_context_allocate(const char* fileprefix,
long pages, long timeout, caddr_t* backup_path_arr, caddr_t *err_ret)
{
ol_backup_context_t* ctx;
int fd;
int restored;
if (pages < MIN_BACKUP_PAGES)
{
*err_ret = srv_make_new_error ("42000", PAGE_NUMBER_ERR_CODE, "Number of backup pages is less than %ld", (long)MIN_BACKUP_PAGES);
return NULL;
}
if (timeout < 0)
{
*err_ret = srv_make_new_error ("42000", TIMEOUT_NUMBER_ERR_CODE, "Timeout can not be negative");
return NULL;
}
if (strlen(fileprefix) > FILEN_BUFSIZ)
{
*err_ret = srv_make_new_error ("42000", FILE_SZ_ERR_CODE, "Prefix name too long");
return NULL;
}
ctx = (ol_backup_context_t*) dk_alloc (sizeof (ol_backup_context_t));
memset (ctx, 0, sizeof (ol_backup_context_t));
ctx->octx_backup_patha = backup_path_arr;
ctx->octx_max_wr_bytes = (OFF_T) ol_max_dir_sz;
if (!ol_known_pages)
ol_known_pages = hash_table_allocate (101);
ctx->known = ol_known_pages;
ctx->octx_max_pages = pages;
ctx->octx_incset = incset_make_copy (wi_inst.wi_master->dbs_incbackup_set);
restored = try_to_restore_backup_context (ctx);
if (!restored)
memcpy (ctx->octx_file_prefix, fileprefix, strlen (fileprefix));
fd = backup_context_increment (ctx,0);
if (fd >= 0)
{
ctx->octx_dbs = wi_inst.wi_master;
if (!restored)
ctx->octx_timestamp = sqlbif_rnd (&rnd_seed_b) + approx_msec_real_time ();
ctx->octx_cpt_remap_r = hash_reverse (ctx->octx_dbs->dbs_cpt_remap);
if (!ctx->octx_cpt_remap_r)
GPF_T1 ("wrong hash table");
if (timeout)
ctx->octx_deadline = get_msec_real_time () + timeout;
ctx->octx_last_page = ctx->octx_dbs->dbs_n_pages;
return ctx;
}
else
{
*err_ret = srv_make_new_error ("42000", BACKUP_FILE_CR_ERR_CODE, "Could not create backup file %s", ctx->octx_curr_file);
dk_free (ctx, sizeof (ol_backup_context_t));
return NULL;
}
}
#define CHECK_ERROR(ctx, error) \
if (ctx->octx_error) \
goto error;
#define LOG_ERROR(ctx, x, error) \
log_error x; \
ctx->octx_error = 1;\
ctx->octx_error_string = make_error_string x; \
ctx->octx_error_code = box_string (FILE_ERR_CODE); \
goto error;
static
char * make_error_string (char * msg, ...)
{
char * message;
char buf[1025];
va_list list;
va_start (list, msg);
vsnprintf (buf, 1024, msg, list);
va_end (list);
message = dk_alloc_box (strlen (buf)+1, DV_STRING);
strcpy_box_ck (message, buf);
return message;
}
static
void make_log_error (ol_backup_context_t* ctx, const char* code, const char* msg, ...)
{
char temp[2000];
char* buf = temp;
va_list list;
if (ctx->octx_error)
return;
buf[0]='['; buf++;
strcpy_size_ck (buf, code, sizeof (temp) - (buf - temp));
buf+=strlen(code);
buf[0]=']'; buf[1]=' '; buf+=2;
va_start (list, msg);
vsnprintf (buf, sizeof (temp) - (buf - temp), msg, list);
va_end (list);
#ifdef TEST_ERR_REPORT
log_error ("%s", temp);
return;
#endif
ctx->octx_error = 1;
ctx->octx_error_code = box_string (code);
ctx->octx_error_string = box_string (temp);
log_error ("%s", temp);
return;
}
#ifdef TEST_ERR_REPORT
caddr_t
bif_test_error (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
ol_backup_context_t * ctx = dk_alloc (sizeof (ol_backup_context_t));
memset (ctx, 0, sizeof (ol_backup_context_t));
make_log_error (ctx, COMPRESS_ERR_CODE, COMPRESS_ERR_STR, 14);
make_log_error (ctx, READ_ERR_CODE, READ_ERR_STR, 14);
make_log_error (ctx, STORE_CTX_ERR_CODE, STORE_CTX_ERR_STR);
make_log_error (ctx, READ_CTX_ERR_CODE, READ_CTX_ERR_STR);
return NEW_DB_NULL;
}
#endif
#ifdef INC_DEBUG
caddr_t
bif_backup_rep (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
char temp [128];
long cnt = 0;
dk_hash_iterator_t hit;
ptrlong k,v;
dbe_storage_t * dbs = wi_inst.wi_master;
dp_addr_t page = dbs->dbs_cp_remap_pages ? (dp_addr_t) (unsigned long) dbs->dbs_cp_remap_pages->data : 0;
for (dk_hash_iterator (&hit, wi_inst.wi_master->dbs_cpt_remap);
dk_hit_next (&hit, (void**) &k, (void**) &v);
/* */)
{
cnt++;
}
snprintf (temp, sizeof (temp), "remap pages = %ld [%ld]", cnt, page);
return box_dv_short_string (temp);
}
#endif
static int try_to_change_dir (ol_backup_context_t * ctx)
{
if (((ctx->octx_curr_dir)+1) < BOX_ELEMENTS (ctx->octx_backup_patha))
{
++ctx->octx_curr_dir;
if (0 < backup_context_increment (ctx, 0))
return 1;
}
return 0;
}
#define OB_IN_CPT(need_mtx,qi) \
if (need_mtx) \
IN_CPT (qi->qi_trx); \
else \
{ \
IN_TXN; \
lt_threads_dec_inner (qi->qi_trx); \
LEAVE_TXN; \
}
#define OB_LEAVE_CPT(need_mtx,qi) \
if (need_mtx) \
{ \
IN_TXN; \
cpt_over (); \
LEAVE_TXN; \
LEAVE_CPT(qi->qi_trx); \
} \
else \
{ \
IN_TXN; \
lt_threads_inc_inner (qi->qi_trx); \
LEAVE_TXN; \
}
#define OB_LEAVE_CPT_1(need_mtx,qi) \
if (need_mtx) \
{ \
LEAVE_CPT(qi->qi_trx); \
} \
else \
{ \
IN_TXN; \
lt_threads_inc_inner (qi->qi_trx); \
LEAVE_TXN; \
}
long ol_backup (const char* prefix, long pages, long timeout, caddr_t* backup_path_arr, query_instance_t *qi)
{
dbe_storage_t * dbs = wi_inst.wi_master;
dk_session_t * ses;
int need_mtx = !srv_have_global_lock(THREAD_CURRENT_THREAD);
ol_backup_context_t * ctx;
long _pages;
buffer_desc_t *cfg_buf = buffer_allocate (DPF_CP_REMAP);
wi_database_t db;
char * log_name;
caddr_t err = NULL;
#ifdef OBACKUP_TRACE
obackup_trace = fopen ("obackup.out", "a");
#endif
OB_IN_CPT (need_mtx,qi);
log_name = sf_make_new_log_name (wi_inst.wi_master);
IN_TXN;
dbs_checkpoint (log_name, CPT_INC_RESET);
cpt_over ();
LEAVE_TXN;
cfg_buf->bd_page = cfg_buf->bd_physical_page = 0;
cfg_buf->bd_storage = wi_inst.wi_master;
if (WI_ERROR == ol_buf_disk_read (cfg_buf))
GPF_T1 ("obackup can't read cfg page");
memcpy (&db, cfg_buf->bd_buffer, sizeof (wi_database_t));
buffer_free (cfg_buf);
ctx = backup_context_allocate (prefix, pages, timeout, backup_path_arr, &err);
if (err)
{
OB_LEAVE_CPT_1 (need_mtx,qi);
sqlr_resignal (err);
}
ctx->octx_dbs = wi_inst.wi_master;
_pages = ctx->octx_page_count;
IN_DBS (dbs);
ctx->octx_free_set = dbs_read_page_set (dbs, db.db_free_set, DPF_FREE_SET);
ctx->octx_ext_set = dbs_read_page_set (dbs, db.db_extent_set, DPF_EXTENT_SET);
if (db.db_checkpoint_map)
ctx->octx_cpt_set = dbs_read_page_set (dbs, db.db_checkpoint_map, DPF_CP_REMAP);
LEAVE_DBS (dbs);
#ifdef OBACKUP_TRACE
fprintf (obackup_trace, "\n\nBackup file prefix %s\n", prefix);
#endif
ses = dbs_read_registry (ctx->octx_dbs, qi->qi_client);
ctx->octx_registry = (caddr_t *) read_object (ses);
dk_free_box ((caddr_t)ses);
memset (&backup_status, 0, sizeof (backup_status_t));
backup_status.is_running = 1;
backup_status.pages = dbs_count_incbackup_pages (wi_inst.wi_master);
time (&db_bp_date);
dir_first_page = 0;
CATCH_WRITE_FAIL (ctx->octx_file)
{
ol_write_header (ctx);
backup_context_flush (ctx);
}
FAILED
{
LOG_ERROR (ctx, ("Backup file [%s] writing error", ctx->octx_curr_file), error);
}
db_backup_pages (ctx, 0, 0);
CHECK_ERROR (ctx, error);
/* flushed, so out_fill is not needed */
ctx->octx_wr_bytes += ctx->octx_file->dks_bytes_sent;
store_backup_context (ctx);
CHECK_ERROR (ctx, error);
ol_write_cfg_page (ctx);
CHECK_ERROR (ctx, error);
store_backup_context (ctx);
CHECK_ERROR (ctx, error);
IN_DBS (dbs);
dbs_write_page_set (dbs, dbs->dbs_incbackup_set);
LEAVE_DBS (dbs);
#if 0
DO_SET (dbe_storage_t *, dbs, &wi_inst.wi_master_wd->wd_storage)
{
if (dbs->dbs_slices)
{
#ifdef OBACKUP_TRACE
fprintf (obackup_trace, "\n\nDBS: %s\n", dbs->dbs_name);
#endif
}
}
END_DO_SET ();
#endif
if (obackup_trace)
{
fflush (obackup_trace);
fclose (obackup_trace);
obackup_trace = NULL;
}
log_info ("Backed up pages: [%ld]", ctx->octx_page_count - _pages);
#ifdef DEBUG
log_info ("Log = %s", wi_inst.wi_master->dbs_log_name);
#endif
OB_LEAVE_CPT_1 (need_mtx,qi);
_pages = ctx->octx_page_count - _pages;
backup_context_free(ctx);
backup_status.is_running = 0;
return _pages;
error:
db_bp_date = 0;
incset_rollback (ctx);
ctx_clear_backup_files (ctx);
strncpy (backup_status.errcode, ctx->octx_error_code, 100);
strncpy (backup_status.errstring, ctx->octx_error_string, 1024);
backup_status.is_error = 1;
backup_status.is_running = 0;
OB_LEAVE_CPT_1 (need_mtx,qi);
backup_context_free (ctx);
sqlr_new_error ("42000", backup_status.errcode, "%s", backup_status.errstring);
return 0; /* keeps compiler happy */
}
void bp_sec_user_check (query_instance_t * qi)
{
if (!sec_user_has_group_name ("BACKUP", qi->qi_u_id) &&
!sec_user_has_group_name ("dba", qi->qi_u_id))
{
user_t *u = sec_id_to_user (qi->qi_u_id);
sqlr_new_error ("42000", USER_PERM_ERR_CODE , "user %s is not authorized to make online backup", u->usr_name);
}
}
void
bp_sec_check_prefix (query_instance_t * qi, char *file_prefix)
{
char * s;
if (!file_prefix[0])
sqlr_new_error ("42000", FILE_FORM_ERR_CODE , "Backup prefix must contain at least one char");
if (file_prefix[0] == '/')
sqlr_new_error ("42000", FILE_FORM_ERR_CODE, "Absolute path as backup prefix is not allowed");
s = strchr (file_prefix, ':');
if (s)
sqlr_new_error ("42000", FILE_FORM_ERR_CODE , "Colon in backup prefix is not allowed");
s = strchr (file_prefix, '.');
while (s)
{
if (s[1] == '.')
sqlr_new_error ("42000", FILE_FORM_ERR_CODE , "\"..\" substring in backup prefix is not allowed");
s = strchr (s + 1, '.');
}
}
static
caddr_t bif_backup_report (caddr_t* qst, caddr_t* err_ret, state_slot_t** args)
{
dk_set_t s = 0;
dk_set_push (&s, box_string ("seq"));
dk_set_push (&s, box_num (bp_ctx.db_bp_num));
dk_set_push (&s, box_string ("done"));
dk_set_push (&s, box_num (backup_status.processed_pages));
dk_set_push (&s, box_string ("all"));
dk_set_push (&s, box_num (backup_status.pages));
return list_to_array (dk_set_nreverse (s));
}
static
caddr_t* bif_backup_dirs_arg (caddr_t* qst, state_slot_t** args, int num, const char* func_name)
{
caddr_t * ba = (caddr_t*) bif_arg (qst, args, num, func_name);
if (DV_ARRAY_OF_POINTER == DV_TYPE_OF (ba))
{
int inx;
DO_BOX (caddr_t, elt, inx, ba)
{
if (!IS_STRING_DTP(DV_TYPE_OF(elt)))
goto err;
}
END_DO_BOX;
return ba;
}
err:
sqlr_new_error ("42001", BACKUP_DIR_ARG_ERR_CODE, "The argument %d of %s must be an array of strings", num+1, func_name);
return 0; /* keeps compiler happy */
}
caddr_t
bif_backup_online (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
query_instance_t *qi = (query_instance_t *) qst;
caddr_t file_prefix;
long pages ;
long timeout = 0;
long res = 0;
caddr_t * backup_path_arr = backup_patha;
ob_err_ctx_t e_ctx;
memset (&e_ctx, 0, sizeof (ob_err_ctx_t));
QI_CHECK_STACK (qi, &qi, OL_BACKUP_STACK_MARGIN);
QR_RESET_CTX
{
file_prefix = bif_string_arg (qst, args, 0, "backup_online");
pages = (long) bif_long_arg (qst, args, 1, "backup_online");
bp_sec_user_check (qi);
bp_sec_check_prefix (qi, file_prefix);
/* timeout feature disabled */
/* if (BOX_ELEMENTS (args) > 2)
timeout = (long) bif_long_arg (qst, args, 2, "backup_online"); */
if (BOX_ELEMENTS (args) > 3)
backup_path_arr = bif_backup_dirs_arg (qst, args, 3, "backup_online");
if (-1 == ob_foreach_dir (backup_path_arr, file_prefix, &e_ctx, ob_check_file))
sqlr_new_error ("42000", DIR_CLEARANCE_ERR_CODE, "directory %s contains backup file %s, backup aborted", backup_path_arr[e_ctx.oc_inx], e_ctx.oc_file);
ch_c = cm_c = 0;
res = ol_backup (file_prefix, pages, timeout, backup_path_arr, qi);
}
QR_RESET_CODE
{
du_thread_t *self = THREAD_CURRENT_THREAD;
caddr_t* err = (caddr_t*) thr_get_error_code (self);
POP_QR_RESET;
if ((DV_TYPE_OF (err) == DV_ARRAY_OF_POINTER) &&
BOX_ELEMENTS (err) == 3)
{
backup_status.is_error = 1;
strncpy (backup_status.errcode, err[1], 100);
strncpy (backup_status.errstring, err[2], 1024);
}
sqlr_resignal ((caddr_t)err);
}
END_QR_RESET;
return box_num (res);
}
static
int ob_unlink_file (caddr_t elt, caddr_t ctx, caddr_t dir)
{
if (0 < ob_get_num_from_file (elt, ctx))
{
char path[PATH_MAX+1];
char *path_tail = path;
memset (path, 0, PATH_MAX+1);
if ((strlen (dir) + strlen (elt) + 1)>PATH_MAX)
return -1;
strcpy (path, dir);
path_tail = path + strlen(path);
*path_tail = '/'; ++path_tail;
while (elt[0])
*(path_tail++) = *(elt++);
unlink (path);
}
return 0;
}
static caddr_t
bif_backup_dirs_clear (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
caddr_t * dirs = backup_patha;
caddr_t prefix = bif_string_arg (qst, args, 0, "backup_dirs_clear");
ob_err_ctx_t e_ctx;
memset (&e_ctx, 0, sizeof (ob_err_ctx_t));
if (BOX_ELEMENTS (args) > 1)
dirs = bif_backup_dirs_arg (qst, args, 1, "backup_dirs_clear");
ob_foreach_dir (dirs, prefix, &e_ctx, ob_unlink_file);
return NEW_DB_NULL;
}
static caddr_t
bif_backup_def_dirs (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
return box_copy_tree ((box_t) backup_patha);
}
static caddr_t
bif_backup_context_clear (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
query_instance_t * qi = (query_instance_t*) qst;
dbe_storage_t * dbs = wi_inst.wi_master;
int make_cp = 1;
int need_mtx = !srv_have_global_lock(THREAD_CURRENT_THREAD);
if (BOX_ELEMENTS (args) > 0)
make_cp = (int) bif_long_arg (qst, args, 0, "backup_context_clear");
bp_sec_user_check (qi);
OB_IN_CPT (need_mtx, qi);
memset (&bp_ctx, 0, sizeof (ol_backup_ctx_t));
{
char * log_name = sf_make_new_log_name (wi_inst.wi_master);
IN_TXN;
dbs_checkpoint (log_name, CPT_INC_RESET);
LEAVE_TXN;
}
{
buffer_desc_t * is = dbs->dbs_incbackup_set;
buffer_desc_t * fs = dbs_read_page_set (wi_inst.wi_master, wi_inst.wi_master->dbs_free_set->bd_page, DPF_FREE_SET);
while (fs && is)
{
memcpy (is->bd_buffer + DP_DATA, fs->bd_buffer + DP_DATA, PAGE_DATA_SZ);
page_set_checksum_init (is->bd_buffer + DP_DATA);
fs = fs->bd_next;
is = is->bd_next;
}
if (fs || is)
log_error ("free set and incbackup set have different lengths in reset of backup ctx. The reset is only partly done; the database should be restored from a crash dump.");
ol_write_registry (wi_inst.wi_master, NULL, ol_regist_unmark);
{
dk_hash_iterator_t hit;
void *dp, *remap_dp;
dk_hash_iterator (&hit, dbs->dbs_cpt_remap);
while (dk_hit_next (&hit, &dp, &remap_dp))
dp_set_backup_flag (dbs, (dp_addr_t) (ptrlong) remap_dp, 0);
}
/* cp remap pages will be ignored, so do not leave trash
for dbs_count_pageset_items_2 */
DO_SET (caddr_t, _page, &dbs->dbs_cp_remap_pages)
{
dp_set_backup_flag (dbs, (dp_addr_t)(ptrlong) _page, 0);
}
END_DO_SET();
buffer_set_free (fs);
}
dbs_write_page_set (dbs, dbs->dbs_incbackup_set);
dbs_write_cfg_page (dbs, 0);
OB_LEAVE_CPT (need_mtx, qi);
return NEW_DB_NULL;
}
static caddr_t
bif_backup_context_info_get (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
caddr_t param_name = bif_string_arg (qst, args, 0, "backup_context_info_get");
caddr_t param = 0;
/* param parsing section */
if (!stricmp (param_name, "prefix"))
param = bp_curr_prefix();
else if (!stricmp (param_name, "date"))
param = bp_curr_date ();
else if (!stricmp (param_name, "ts"))
param = bp_curr_timestamp ();
else if (!stricmp (param_name, "num"))
return bp_curr_num (); /* zero is allowed, so return here */
else if (!stricmp (param_name, "dir_inx"))
return bp_curr_inx ();
else if (!stricmp (param_name, "run"))
return box_num (backup_status.is_running);
else if (!stricmp (param_name, "errorc"))
{
if (backup_status.is_error)
return box_string (backup_status.errcode);
else
return NEW_DB_NULL;
}
else if (!stricmp (param_name, "errors"))
{
if (backup_status.is_error)
return box_string (backup_status.errstring);
else
return NEW_DB_NULL;
}
if (param)
return param;
else
return NEW_DB_NULL;
}
static caddr_t
bif_backup_online_header_get (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
char * fileprefix = bif_string_arg (qst, args, 0, "backup_online_header_get");
long num = (long) bif_long_arg (qst, args, 1, "backup_online_header_get") - 1;
ol_backup_context_t* ctx;
char * header = 0;
int fd;
if (box_length (fileprefix) - 1 > BACKUP_PREFIX_SZ)
sqlr_new_error ("42000", FILE_SZ_ERR_CODE , "file prefix too long");
ctx = (ol_backup_context_t*) dk_alloc (sizeof (ol_backup_context_t));
memset (ctx, 0, sizeof (ol_backup_context_t));
memcpy (ctx->octx_file_prefix, fileprefix, strlen (fileprefix));
ctx->octx_num = num;
ctx->octx_backup_patha = backup_patha;
fd = backup_context_increment (ctx,1);
if (fd < 0)
goto fin;
CATCH_READ_FAIL (ctx->octx_file)
{
read_backup_header (ctx, &header);
}
FAILED
{
}
END_READ_FAIL (ctx->octx_file);
fin:
backup_context_free (ctx);
if (header)
return header;
sqlr_new_error ("42000", FILE_OPEN_ERR_CODE , "could not open backup file with prefix %s num %ld", fileprefix, num);
return 0; /* keeps compiler happy */
}
/* restore */
ol_backup_context_t*
restore_context_allocate(const char* fileprefix)
{
ol_backup_context_t* ctx;
int fd;
ctx = (ol_backup_context_t*) dk_alloc (sizeof (ol_backup_context_t));
memset (ctx, 0, sizeof (ol_backup_context_t));
memcpy (ctx->octx_file_prefix, fileprefix, strlen (fileprefix));
ctx->octx_backup_patha = backup_patha;
fd = backup_context_increment (ctx,1);
if (fd > 0)
{
int db_exists = 0;
db_read_cfg (NULL, "-r");
cp_buf = buffer_allocate (DPF_CP_REMAP);
ctx->octx_dbs = dbs_from_file ("master", NULL, DBS_RECOVER, &db_exists);
if (db_exists)
{
log_error ("Remove database file before recovery");
/* leak, but program shuts down anyway */
return 0;
}
return ctx;
}
else
{
dk_free (ctx, sizeof (ol_backup_context_t));
return NULL;
}
}
void
buf_disk_raw_write (buffer_desc_t* buf)
{
dbe_storage_t* dbs = buf->bd_storage;
dp_addr_t dest = buf->bd_physical_page;
OFF_T off;
OFF_T rc;
ext_ref_t er;
if (!IS_IO_ALIGN (buf->bd_buffer))
GPF_T1 ("buf_disk_raw_write (): The buffer is not io-aligned");
if (dbs->dbs_disks)
{
disk_stripe_t *dst;
int fd;
OFF_T rc;
IN_DBS (dbs);
while (dest >= dbs->dbs_n_pages)
{
rc = dbs_seg_extend (dbs, EXTENT_SZ);
if (rc != EXTENT_SZ)
{
log_error ("Cannot extend database, please free disk space and try again.");
call_exit (-1);
}
dbs_extend_ext_cache (dbs);
}
LEAVE_DBS (dbs);
dst = dp_disk_locate (dbs, dest, &off, 1, &er);
fd = dst_fd (dst);
rc = LSEEK (fd, off, SEEK_SET);
if (rc != off)
{
log_error ("Seek failure on stripe %s rc=" BOXINT_FMT " errno=%d off=" BOXINT_FMT ".", dst->dst_file, rc, errno, off);
GPF_T;
}
rc = write (fd, buf->bd_buffer, PAGE_SZ);
if (rc != PAGE_SZ)
{
log_error ("Write failure on stripe %s", dst->dst_file);
GPF_T;
}
dst_fd_done (dst, fd, &er);
}
else
{
OFF_T off_dest = ((OFF_T) dest) * PAGE_SZ;
if (off_dest >= dbs->dbs_file_length)
{
/* Fill the gap. */
LSEEK (dbs->dbs_fd, 0, SEEK_END);
while (dbs->dbs_file_length <= off_dest)
{
if (PAGE_SZ != write (dbs->dbs_fd, (char *)(buf->bd_buffer),
PAGE_SZ))
{
log_error ("Write failure on database %s", dbs->dbs_file);
GPF_T;
}
dbs->dbs_file_length += PAGE_SZ;
}
}
else
{
off = ((OFF_T)buf->bd_physical_page) * PAGE_SZ;
if (off != (rc = LSEEK (dbs->dbs_fd, off, SEEK_SET)))
{
log_error ("Seek failure on database %s rc=" BOXINT_FMT " errno=%d off=" BOXINT_FMT ".", dbs->dbs_file, rc, errno, off);
GPF_T;
}
rc = write (dbs->dbs_fd, (char *)(buf->bd_buffer), PAGE_SZ);
if (rc != PAGE_SZ)
{
log_error ("Write failure on database %s", dbs->dbs_file);
GPF_T;
}
}
}
}
int ob_just_report = 0;
static int
read_backup_header (ol_backup_context_t* ctx, char ** header)
{
long len;
char prefix[FILEN_BUFSIZ];
uint32 timestamp;
char * ts_str;
long num;
/* prefix */
len = read_long (ctx->octx_file);
if ((len == -1) || (len >= FILEN_BUFSIZ))
{
log_error ("Backup file %s is corrupted", ctx->octx_curr_file);
return 0;
}
session_buffered_read (ctx->octx_file, prefix, len);
prefix[len] = 0;
if (!ob_just_report && strcmp (prefix, ctx->octx_file_prefix))
{
if (!header) log_error ("Prefix [%s] is wrong, should be [%s]", prefix, ctx->octx_file_prefix);
return 0;
}
/* timestamp */
timestamp = read_long (ctx->octx_file);
if (!ctx->octx_timestamp)
ctx->octx_timestamp = timestamp;
else
if (!ob_just_report && (timestamp != ctx->octx_timestamp))
{
if (!header)
log_error ("Timestamp [%lx] is wrong in file %s", timestamp, ctx->octx_curr_file);
return 0;
}
/* number of this file */
num = read_long (ctx->octx_file);
if (!ob_just_report && (ctx->octx_num != num))
{
if (!header)
log_error ("Number of file %s differs from internal number [%ld]", ctx->octx_curr_file, num);
return 0;
}
/* size of all backup */
ctx->octx_last_page = read_long (ctx->octx_file);
ts_str = format_timestamp (&ctx->octx_timestamp);
if (!header)
log_info ("--> Backup file # %ld [%s]", num, ts_str);
if (!header && ob_just_report)
log_info ("----> %s %s %ld %ld", prefix, ts_str, num, ctx->octx_last_page);
if (header)
{
char tmpstr_s[255];
char * tmpstr = tmpstr_s;
memset (tmpstr, 0, 255);
memcpy (tmpstr, prefix, len);
tmpstr+=len;
*(tmpstr++) = ':';
memcpy (tmpstr, ts_str, strlen (ts_str));
tmpstr+=strlen(ts_str);
*(tmpstr++) = ':';
if (num > 999999)
num = 999999;
snprintf (tmpstr, sizeof (tmpstr_s) - (tmpstr - tmpstr_s), "%ld", num);
header[0] = box_dv_short_string (tmpstr_s);
}
dk_free_box (ts_str);
return 1;
}
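The header fields parsed above — prefix, timestamp and file number — are also joined into a `prefix:timestamp:num` string for the caller. A minimal Python sketch of that formatting, including the 999999 clamp from the C code; the function name is a hypothetical stand-in:

```python
def format_backup_header(prefix, ts_str, num):
    # Mirrors the header string built by read_backup_header():
    # "prefix:timestamp:num", with the file number clamped at 999999.
    if num > 999999:
        num = 999999
    return "%s:%s:%d" % (prefix, ts_str, num)
```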
#ifdef DBG_BREAKPOINTS
static int ol_breakpoint()
{
return 0;
}
#endif
static int
check_configuration (buffer_desc_t * buf)
{
caddr_t page_buf = (caddr_t)buf->bd_buffer;
wi_database_t db;
memcpy (&db, page_buf, sizeof (wi_database_t));
if (dbs_byte_order_cmp (db.db_byte_order))
{
log_error ("The backup was produced on a system with different byte order. Exiting.");
return -1;
}
((wi_database_t *)page_buf)->db_stripe_unit = buf->bd_storage->dbs_stripe_unit;
return 0;
}
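The byte-order test above rejects backups produced on a machine with different endianness. A hedged Python sketch of the same idea using a stored probe word; the magic value is an assumption for illustration, not Virtuoso's actual encoding:

```python
import struct

DB_ORDER_MAGIC = 0x44332211  # hypothetical byte-order probe value

def byte_order_matches(stored_bytes):
    # Reinterpret the stored probe in the native byte order; a mismatch
    # means the backup came from a machine with different endianness,
    # which is what dbs_byte_order_cmp() detects before aborting.
    return struct.unpack('=I', stored_bytes)[0] == DB_ORDER_MAGIC
```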
static int
insert_page (ol_backup_context_t* ctx, dp_addr_t page_dp)
{
ALIGNED_PAGE_BUFFER (page_buf);
buffer_desc_t buf;
caddr_t compr_buf;
compr_buf = (caddr_t) read_object (ctx->octx_file);
/* session_buffered_read (ctx->octx_file, page_buf, PAGE_SZ); */
if (!compr_buf || (Z_OK != uncompress_buffer (compr_buf, page_buf)))
log_error ("Could not recover page %ld from backup file %s", page_dp, ctx->octx_curr_file);
buf.bd_page = buf.bd_physical_page = page_dp;
buf.bd_buffer = page_buf;
buf.bd_storage = ctx->octx_dbs;
if (!page_dp) /* config page, check byte ordering */
{
if (-1 == check_configuration (&buf))
return -1;
}
if (!ob_just_report)
buf_disk_raw_write (&buf);
else
log_info ("-----> page %ld", page_dp);
dk_free_box (compr_buf);
return 0;
}
int restore_from_files (const char* prefix)
{
ol_backup_context_t * ctx;
int count = 0;
int volatile hdr_is_read = 0;
dp_addr_t page_dp = 0;
backup_path_init ();
ctx = restore_context_allocate (prefix);
if (!ctx)
{
/* report error */
log_error ("Could not restore database using prefix %s", prefix);
return -1;
}
log_info ("Begin to restore with file prefix %s", ctx->octx_file_prefix);
do
{
again:
hdr_is_read = 0;
CATCH_READ_FAIL (ctx->octx_file)
{
if (read_backup_header (ctx, 0))
{
hdr_is_read = 1;
page_dp = read_long (ctx->octx_file);
}
else
{
log_error ("Unable to read backup file header, %s corrupted", ctx->octx_curr_file);
log_error ("Remove the database file created by the incomplete recovery");
backup_context_free (ctx);
return -1;
}
}
FAILED
{
if (hdr_is_read == 0)
{
log_error ("Failed to restore from file %s after %d pages", ctx->octx_curr_file, count);
backup_context_free (ctx);
return -1;
}
else
{
if (backup_context_increment (ctx,1) > 0)
goto again;
goto end;
}
}
END_READ_FAIL (ctx->octx_file);
while (1)
{
if (-1 == insert_page (ctx, page_dp))
{
log_error ("Aborting");
backup_context_free (ctx);
return -1;
}
count++;
CATCH_READ_FAIL (ctx->octx_file)
{
page_dp = read_long (ctx->octx_file);
}
FAILED
{
if (backup_context_increment (ctx,1) > 0)
goto again;
goto end;
}
END_READ_FAIL (ctx->octx_file);
}
} while (backup_context_increment (ctx,1) > 0);
end:
log_info ("Finished restoring from backup, %d pages", count);
backup_context_free (ctx);
return 0;
}
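The control flow of `restore_from_files` — per-file header validation followed by a page-insertion loop, moving on to the next numbered file until the sequence ends — can be sketched as follows; the callbacks are hypothetical stand-ins for the C stream primitives:

```python
def restore_pages(files, read_header, read_pages, insert_page):
    # Control-flow sketch of restore_from_files(): validate each file's
    # header, then insert every page it contains; a bad header aborts
    # the whole restore, as in the C code.
    count = 0
    for f in files:
        if not read_header(f):
            return -1
        for page_dp, page in read_pages(f):
            insert_page(page_dp, page)
            count += 1
    return count
```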
long dbs_count_incbackup_pages (dbe_storage_t * dbs);
static caddr_t
bif_backup_pages (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
return box_num (dbs_count_incbackup_pages (wi_inst.wi_master));
}
static caddr_t
bif_checkpoint_pages (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
int cc = 0;
if (wi_inst.wi_master->dbs_cpt_remap)
{
dk_hash_iterator_t hit;
ptrlong p, r;
for (dk_hash_iterator (&hit, wi_inst.wi_master->dbs_cpt_remap);
dk_hit_next (&hit, (void **) &p, (void **) &r);
/* */)
{
cc++;
}
}
return box_num (cc);
}
static caddr_t
bif_backup_max_dir_size (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
long sz = bif_long_arg (qst, args, 0, "backup_max_dir_size");
ol_max_dir_sz = sz;
return NEW_DB_NULL;
}
static caddr_t
bif_backup_check (caddr_t * qst, caddr_t * err_ret, state_slot_t ** args)
{
dk_set_t paths = 0;
int inx = 0;
caddr_t * patha, prefix;
ob_err_ctx_t e_ctx;
memset (&e_ctx, 0, sizeof (ob_err_ctx_t));
while (inx < BOX_ELEMENTS (args))
bif_string_arg (qst, args, inx++, "backup_check_test");
inx = 0;
prefix = bif_string_arg (qst, args, inx++, "backup_check_test");
while (inx < BOX_ELEMENTS (args))
dk_set_push (&paths, bif_string_arg (qst, args, inx++, "backup_check_test"));
patha = (caddr_t*) list_to_array (dk_set_nreverse (paths));
if (-1 == ob_foreach_dir (patha, prefix, &e_ctx, ob_check_file))
{
dk_free_box ((box_t) patha);
sqlr_new_error ("42000", DIR_CLEARANCE_ERR_CODE, "directory %d contains backup file %s", e_ctx.oc_inx, e_ctx.oc_file);
}
return NEW_DB_NULL;
}
extern int acl_initilized;
extern void init_file_acl();
static void backup_path_init ()
{
dk_set_t b_dirs = 0;
if (!acl_initilized)
init_file_acl();
init_file_acl_set (backup_dirs, &b_dirs);
if (b_dirs) /* +backup-paths xx1,xx2,xx3 */
backup_patha = (caddr_t*) list_to_array (dk_set_nreverse (b_dirs));
else
{
backup_patha = (caddr_t*) dk_alloc_box (sizeof (caddr_t), DV_ARRAY_OF_POINTER);
backup_patha [0] = box_string (".");
}
}
char* backup_sched_get_info =
"create procedure \"BackupSchedInfo\" () {\n"
" for select SE_START, SE_INTERVAL, SE_LAST_COMPLETED, SE_SQL\n"
" from sys_scheduled_event\n"
" where se_name = DB.DBA.BACKUP_SCHED_NAME ()\n"
" do {\n"
" return vector (SE_START, SE_INTERVAL, SE_LAST_COMPLETED, SE_SQL);\n"
" }\n"
" return NULL;\n"
"}";
char * backup_dir_tbl =
"create table DB.DBA.SYS_BACKUP_DIRS ( bd_id integer, \n"
" bd_dir varchar not null, \n"
" primary key (bd_id)) \n";
char * backup_proc0 =
"create procedure DB.DBA.BACKUP_SCHED_NAME ()\n"
"{\n"
" return \'Backup Scheduled Task\';\n"
"}\n";
char * backup_proc1 =
"create procedure DB.DBA.BACKUP_MAKE ( in prefix varchar,\n"
" in max_pages integer,\n"
" in is_full integer) \n"
"{\n"
" if (is_full) \n"
" backup_context_clear();\n"
" declare patha any;\n"
" patha := null;\n"
" for select bd_dir from DB.DBA.SYS_BACKUP_DIRS\n"
" order by bd_id\n"
" do { \n"
" if (patha is null)\n"
" patha := vector (bd_dir);\n"
" else\n"
" patha := vector_concat (patha, vector (bd_dir));\n"
" }\n"
" \n"
" if (patha is null)\n"
" backup_online (prefix, max_pages);\n"
" else\n"
" backup_online (prefix, max_pages, 0, patha);\n"
" if (__proc_exists ('DB.DBA.BACKUP_COMPLETED') is not null)\n"
" DB.DBA.BACKUP_COMPLETED ();\n"
" update DB.DBA.SYS_SCHEDULED_EVENT set\n"
" SE_SQL = sprintf ('DB.DBA.BACKUP_MAKE (\\\'%s\\\', %d, 0)', prefix, max_pages)\n"
" where SE_NAME = DB.DBA.BACKUP_SCHED_NAME ();\n"
"}\n";
void
backup_online_init (void)
{
bif_define ("backup_online", bif_backup_online);
bif_define ("backup_context_clear", bif_backup_context_clear);
bif_define ("backup_context_info_get", bif_backup_context_info_get);
bif_define ("backup_online_header_get", bif_backup_online_header_get);
#ifdef TEST_ERR_REPORT
bif_define ("test_error", bif_test_error );
#endif
#ifdef INC_DEBUG
bif_define ("backup_rep", bif_backup_rep);
#endif
bif_define ("backup_pages", bif_backup_pages);
bif_define ("cpt_remap_pages", bif_checkpoint_pages);
/* test */
bif_define ("backup_check", bif_backup_check);
bif_define ("backup_max_dir_size", bif_backup_max_dir_size);
bif_define ("backup_dirs_clear", bif_backup_dirs_clear);
bif_define ("backup_def_dirs", bif_backup_def_dirs);
bif_define ("backup_report", bif_backup_report);
backup_path_init();
}
void
ddl_obackup_init (void)
{
ddl_std_proc (backup_sched_get_info, 0);
ddl_ensure_table ("DB.DBA.SYS_BACKUP_DIRS", backup_dir_tbl);
ddl_ensure_table ("do this always", backup_proc0);
ddl_ensure_table ("do this always", backup_proc1);
}
caddr_t compressed_buffer (buffer_desc_t* buf)
{
z_stream c_stream; /* compression stream */
int err;
int comprLen = PAGE_SZ;
Byte comp[PAGE_SZ*2];
caddr_t ret_box;
c_stream.zalloc = (alloc_func)0;
c_stream.zfree = (free_func)0;
c_stream.opaque = (voidpf)0;
err = deflateInit(&c_stream, Z_DEFAULT_COMPRESSION);
if (err != Z_OK)
return 0;
c_stream.next_in = (Bytef*)buf->bd_buffer;
c_stream.next_out = &comp[0];
/* while (c_stream.total_in != (uLong)len && c_stream.total_out < comprLen) */
{
c_stream.avail_in = PAGE_SZ;
c_stream.avail_out = comprLen;
err = deflate(&c_stream, Z_NO_FLUSH);
if (err != Z_OK)
return 0;
}
/* Finish the stream, still forcing small buffers: */
for (;;)
{
c_stream.avail_out = 1;
err = deflate(&c_stream, Z_FINISH);
if (err == Z_STREAM_END)
break;
if (err !=Z_OK)
return 0;
}
err = deflateEnd(&c_stream);
if (err != Z_OK)
return 0;
ret_box = dk_alloc_box (c_stream.total_out, DV_BIN);
memcpy (ret_box, comp, c_stream.total_out);
return ret_box;
}
int
uncompress_buffer (caddr_t compr, unsigned char* page_buf)
{
int err;
z_stream d_stream; /* decompression stream */
int compr_len = box_length (compr);
d_stream.zalloc = (alloc_func)0;
d_stream.zfree = (free_func)0;
d_stream.opaque = (voidpf)0;
d_stream.next_in = (Bytef *) compr;
d_stream.avail_in = 0;
d_stream.next_out = page_buf;
err = inflateInit(&d_stream);
if (Z_OK != err)
return err;
/* while (d_stream.total_out <= PAGE_SZ && d_stream.total_in <= compr_len) */
{
d_stream.avail_in = compr_len;
d_stream.avail_out = PAGE_SZ;
err = inflate(&d_stream, Z_NO_FLUSH);
if (err == Z_STREAM_END)
goto cont;
if (err != Z_OK)
return err;
if (d_stream.total_out != PAGE_SZ)
GPF_T1 ("uncompressed buffer is not 8K");
}
cont:
err = inflateEnd(&d_stream);
if (err != Z_OK)
return err;
if (d_stream.total_out != PAGE_SZ)
GPF_T1 ("Page is not recovered properly");
return Z_OK;
}
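`compressed_buffer` and `uncompress_buffer` above wrap zlib's deflate/inflate around fixed-size pages and treat any length mismatch on recovery as fatal. The same round trip using Python's `zlib` module, with PAGE_SZ taken from the "not 8K" message above:

```python
import zlib

PAGE_SZ = 8192  # matches the "uncompressed buffer is not 8K" GPF message

def compress_page(page):
    assert len(page) == PAGE_SZ
    return zlib.compress(page)

def uncompress_page(blob):
    page = zlib.decompress(blob)
    if len(page) != PAGE_SZ:
        # mirror the hard failure the C code raises via GPF_T1
        raise ValueError("uncompressed buffer is not 8K")
    return page
```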
/* transactions over incset */
static
buffer_desc_t * incset_make_copy (buffer_desc_t * incset_orig_buf)
{
buffer_desc_t * incset_buf = buffer_allocate (~0);
buffer_desc_t * incset_copy = incset_buf;
buffer_desc_t * incset_prev_buf = incset_buf;
memcpy (incset_buf->bd_buffer, incset_orig_buf->bd_buffer, PAGE_SZ);
incset_orig_buf = incset_orig_buf->bd_next;
while (incset_orig_buf)
{
incset_buf = buffer_allocate (~0);
memcpy (incset_buf->bd_buffer, incset_orig_buf->bd_buffer, PAGE_SZ);
incset_prev_buf->bd_next = incset_buf;
incset_prev_buf = incset_buf;
incset_orig_buf = incset_orig_buf->bd_next;
}
incset_buf->bd_next = 0;
return incset_copy;
}
static
void incset_rollback (ol_backup_context_t* ctx)
{
buffer_desc_t * buf = ctx->octx_incset;
buffer_desc_t * incset = wi_inst.wi_master->dbs_incbackup_set;
while (buf)
{
memcpy (incset->bd_buffer + DP_DATA, buf->bd_buffer + DP_DATA, PAGE_DATA_SZ);
incset = incset->bd_next;
buf = buf->bd_next;
}
return;
}
static
void ctx_clear_backup_files (ol_backup_context_t* ctx)
{
DO_SET (caddr_t, file, &ctx->octx_backup_files)
{
int retcode = unlink (file);
if (-1 == retcode)
log_error ("Failed to unlink backup file %s", file);
}
END_DO_SET();
}
long
dbs_count_pageset_items_2 (dbe_storage_t * dbs, buffer_desc_t* pset)
{
dk_hash_t * remaps = hash_table_allocate (dk_set_length (dbs->dbs_cp_remap_pages));
int i_count = 0;
dp_addr_t p_count = 0; /*pages*/
DO_SET (void*, remap, &dbs->dbs_cp_remap_pages)
{
sethash (remap, remaps, (void*) 1);
}
END_DO_SET();
while (pset)
{
size_t sz = PAGE_DATA_SZ;
uint32 * ib_uint = (uint32*) (pset->bd_buffer + DP_DATA);
while (sz)
{
int idx;
for (idx = 0; idx < BITS_IN_LONG; idx++) /* since uint32 is used */
{
/* ignore zero page - it obviously goes to the backup */
if (!p_count++)
continue;
/* cpt_remap pages do not go to the backup */
if (gethash ((void*)((ptrlong)p_count - 1), remaps))
continue;
if (p_count - 1 >= dbs->dbs_n_pages)
goto fin;
if (ib_uint[0] & (1 << idx))
{
i_count++;
}
}
ib_uint++;
sz -= sizeof (uint32);
}
pset = pset->bd_next;
}
fin:
hash_table_free (remaps);
return i_count;
}
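The bit-counting walk above can be restated compactly: each 32-bit word of the page set contributes one bit per page, and page 0, checkpoint-remap pages and pages beyond the database size are skipped. A Python sketch of the same counting:

```python
def count_pageset_items(words, remaps, n_pages):
    # words: page-set bitmap as 32-bit ints, bit i of word w marking
    # page w*32 + i.  Page 0, checkpoint-remap pages and pages past
    # n_pages are skipped, as in dbs_count_pageset_items_2().
    count = 0
    page = 0
    for w in words:
        for bit in range(32):
            if page >= n_pages:
                return count
            if page and page not in remaps and (w >> bit) & 1:
                count += 1
            page += 1
    return count
```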
/* we make (I & B) to receive real page set ready to backup */
long dbs_count_incbackup_pages (dbe_storage_t * dbs)
{
buffer_desc_t * incbps = incset_make_copy (dbs->dbs_incbackup_set);
buffer_desc_t * ib_buf = incbps, * fs_buf = dbs->dbs_free_set;
int n_pages = 0;
long c;
/* printf ("--> %d %d\n", incbps->bd_page, incbps->bd_physical_page); */
while (ib_buf)
{
uint32* ib_uint;
uint32* fs_uint;
size_t sz = PAGE_DATA_SZ;
if (!fs_buf)
break;
ib_uint = (uint32*) (ib_buf->bd_buffer + DP_DATA);
fs_uint = (uint32*) (fs_buf->bd_buffer + DP_DATA);
while (sz)
{
if (*ib_uint & ~*fs_uint)
log_error (
"There are pages in the backup set that are actually free. "
"Should do backup_context_clear () and thus get a full backup. "
"This can indicate corruption around page %ld.",
(long) (n_pages * 8L * PAGE_DATA_SZ + (PAGE_DATA_SZ - sz) * 8));
ib_uint[0] &= fs_uint[0];
ib_uint++;
fs_uint++;
sz -= sizeof (uint32);
}
ib_buf = ib_buf->bd_next;
fs_buf = fs_buf->bd_next;
n_pages++;
}
c = dbs_count_pageset_items_2 (dbs, incbps);
buffer_set_free (incbps);
return c;
}
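The `ib &= fs` pass above intersects the incremental-backup bitmap with the allocation bitmap, warning when a backup bit covers a page that is actually free. A Python sketch of that word-wise pass; the reading that a set free-set bit means "allocated" follows the C code's usage:

```python
def intersect_with_allocated(inc_words, alloc_words):
    # AND each incremental-backup word with the allocation bitmap and
    # report whether any backup bit covered a free page (ib & ~fs in
    # dbs_count_incbackup_pages); such a state indicates corruption.
    stale = False
    out = []
    for ib, fs in zip(inc_words, alloc_words):
        if ib & ~fs & 0xFFFFFFFF:
            stale = True
        out.append(ib & fs)
    return out, stale
```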
#ifndef HAVE_DIRECT_H
#define DIRNAME(de) de->d_name
#define CHECKFH(df) (df != NULL)
#else
#define DIRNAME(de) de->name
#define CHECKFH(df) (df != -1)
#define S_IFLNK S_IFREG
#endif
caddr_t * ob_file_list (char * fname)
{
long files = 1;
dk_set_t dir_list = NULL;
#ifndef HAVE_DIRECT_H
DIR *df = 0;
struct dirent *de;
#else
char *fname_tail;
ptrlong df = 0, rc = 0;
struct _finddata_t fd, *de;
#endif
char path[PATH_MAX + 1];
STAT_T st;
caddr_t lst;
#ifndef HAVE_DIRECT_H
if (!is_allowed (fname))
sqlr_new_error ("42000", "FA016",
"Access to %s is denied due to access control in ini file", fname);
df = opendir (fname);
#else
if ((strlen (fname) + 3) >= PATH_MAX)
sqlr_new_error ("39000", "FA017", "Path string is too long.");
strcpy_ck (path, fname);
for (fname_tail = path; fname_tail[0]; fname_tail++)
{
if ('/' == fname_tail[0])
fname_tail[0] = '\\';
}
if (fname_tail > path && fname_tail[-1] != '\\')
*(fname_tail++) = '\\';
*(fname_tail++) = '*';
fname_tail[0] = '\0';
if (!is_allowed (path))
sqlr_new_error ("42000", "FA018",
"Access to %s is denied due to access control in ini file", path);
df = _findfirst (path, &fd);
#endif
if (CHECKFH (df))
{
do
{
#ifndef HAVE_DIRECT_H
de = readdir (df);
#else
de = NULL;
if (rc == 0)
de = &fd;
#endif
if (de)
{
if (strlen (fname) + strlen (DIRNAME (de)) + 1 < PATH_MAX)
{
snprintf (path, sizeof (path), "%s/%s", fname, DIRNAME (de));
V_STAT (path, &st);
if (((st.st_mode & S_IFMT) == S_IFDIR) && files == 0)
dk_set_push (&dir_list,
box_dv_short_string (DIRNAME (de)));
else if (((st.st_mode & S_IFMT) == S_IFREG) && files == 1)
dk_set_push (&dir_list,
box_dv_short_string (DIRNAME (de)));
#ifndef WIN32
else if (((st.st_mode & S_IFMT) == S_IFLNK) && files == 2)
dk_set_push (&dir_list,
box_dv_short_string (DIRNAME (de)));
#endif
else if (((st.st_mode & S_IFMT) != 0) && files == 3)
dk_set_push (&dir_list,
box_dv_short_string (DIRNAME (de)));
}
else
{
/* This overflow is only possible on UNIX systems, because it requires
the use of links, but the WIN32 case is handled too, out of caution. */
#ifndef HAVE_DIRECT_H
closedir (df);
#else
_findclose (df);
#endif
sqlr_new_error ("39000", "FA019",
"Path string is too long.");
}
}
#ifdef HAVE_DIRECT_H
rc = _findnext (df, &fd);
#endif
}
while (de);
#ifndef HAVE_DIRECT_H
closedir (df);
#else
_findclose (df);
#endif
}
else
{
sqlr_new_error ("39000", "FA020", "%s", strerror (errno));
}
lst = list_to_array (dk_set_nreverse (dir_list));
return (caddr_t*) lst;
}
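`ob_file_list` filters directory entries by type according to a `files` selector (the C code hardcodes `files = 1`, i.e. regular files only). A portable Python sketch of the same filtering using `os.lstat`:

```python
import os
import stat

def ob_file_list(dirname, files=1):
    # The `files` selector mirrors the C routine's filtering:
    # 0 -> directories, 1 -> regular files, 2 -> symlinks, 3 -> any.
    # (The C version hardcodes files = 1.)
    out = []
    for name in os.listdir(dirname):
        mode = os.lstat(os.path.join(dirname, name)).st_mode
        if ((files == 0 and stat.S_ISDIR(mode)) or
                (files == 1 and stat.S_ISREG(mode)) or
                (files == 2 and stat.S_ISLNK(mode)) or
                files == 3):
            out.append(name)
    return out
```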
static
int ob_get_num_from_file (caddr_t file, caddr_t prefix)
{
if (!strncmp (file, prefix, strlen (prefix)))
{
char * pp = file+strlen(prefix);
int postfix_check=0, digit_check=0;
while (pp[0])
{
if (isdigit (pp[0]) && ++digit_check && ++pp)
continue;
else
{
if ((!strcmp(pp, ".bp")) && ++postfix_check)
break;
else
return 0;
}
}
if (postfix_check && digit_check && (atoi (file+strlen(prefix)) > 0))
return atoi (file+strlen(prefix));
}
return -1;
}
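The filename shape accepted above is `<prefix><digits>.bp`. A regex-based Python sketch; the edge-case return codes follow the spirit rather than every quirk of the C version:

```python
import re

def ob_get_num_from_file(fname, prefix):
    # Returns the positive file number for "<prefix><digits>.bp",
    # -1 for a name with a different prefix, and 0 for a prefixed
    # but malformed name.
    if not fname.startswith(prefix):
        return -1
    m = re.match(r'([1-9][0-9]*)\.bp$', fname[len(prefix):])
    return int(m.group(1)) if m else 0
```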
static
int ob_check_file (caddr_t elt, caddr_t ctx, caddr_t dir)
{
caddr_t prefix = ctx;
int num = 0;
if (bp_ctx.db_bp_ts)
{
num = bp_ctx.db_bp_num;
prefix = bp_ctx.db_bp_prfx;
}
if (ob_get_num_from_file (elt, prefix) > num)
return -1;
return 0;
}
static
int ob_foreach_file (caddr_t dir, caddr_t ctx, ob_err_ctx_t* e_ctx, file_check_f func)
{
int inx;
caddr_t * files = ob_file_list (dir);
DO_BOX (caddr_t, elt, inx, files)
{
if (-1 == (func)(elt, ctx, dir))
{
strncpy (e_ctx->oc_file, elt, FILEN_BUFSIZ);
dk_free_tree ((box_t) files);
return -1;
}
}
END_DO_BOX;
dk_free_tree ((box_t) files);
return 0;
}
static
int ob_foreach_dir (caddr_t * dirs, caddr_t ctx, ob_err_ctx_t* e_ctx, file_check_f func)
{
int inx;
DO_BOX (caddr_t, elt, inx, dirs)
{
if (0 > ob_foreach_file (elt, ctx, e_ctx, func))
{
e_ctx->oc_inx = inx;
return -1;
}
}
END_DO_BOX;
return 0;
}
| {
"pile_set_name": "Github"
} |
/**
* Copyright (C) 2013-2016 The Rythm Engine project
* for LICENSE and other details see:
* https://github.com/rythmengine/rythmengine
*/
package org.rythmengine.essential;
import org.rythmengine.TestBase;
import org.junit.Test;
/**
* Test scripting block parser
*/
public class ScriptBlockParser extends TestBase {
@Test
public void test() {
t = "abc\n@{\n\tint i = 0;\n\tint j = 1;\n}\ni + j = @(i + j)";
s = r(t);
assertEquals("abc\ni + j = 1", s);
}
@Test
public void testInline() {
t = "abc@{\n\tint i = 0;\n\tint j = 1;\n}i + j = @(i + j)";
eq("abci + j = 1");
}
@Test
public void testHalfInline() {
t = "abc@{\n\tint i = 0;\n\tint j = 1;\n}\ni + j = @(i + j)";
eq("abc\ni + j = 1");
}
@Test
public void testHalfInline2() {
// This one doesn't work due to a Rythm limitation. FIXME!
// t = "abc\n@{\n\tint i = 0;\n\tint j = 1;\n}i + j = @(i + j)";
// eq("abc\ni + j = 1");
}
}
/*++
Copyright (c) 1990-1993 Microsoft Corporation
Module Name:
plotui.h
Abstract:
This module contains all plotters's user interface common defines
Author:
02-Dec-1993 Thu 09:56:07 created -by- Daniel Chou (danielc)
[Environment:]
GDI Device Driver - Plotter.
[Notes:]
Revision History:
--*/
#ifndef _PLOTUI_
#define _PLOTUI_
//
// For compilers that don't support nameless unions
//
#ifndef DUMMYUNIONNAME
#ifdef NONAMELESSUNION
#define DUMMYUNIONNAME u
#define DUMMYUNIONNAME2 u2
#define DUMMYUNIONNAME3 u3
#else
#define DUMMYUNIONNAME
#define DUMMYUNIONNAME2
#define DUMMYUNIONNAME3
#endif
#endif
//
// PRINTERINFO data structure, used by the following calls to map an hPrinter
// to this data structure:
//
// 1. DrvDeviceCapabilities()
// 2. DrvDocumentProperties()
// 3. AdvanceDocumentProperties()
// 4. PrinterProperties()
//
#define PIF_UPDATE_PERMISSION 0x01
#define PIF_DOCPROP 0x02
typedef struct _PRINTERINFO {
HANDLE hPrinter; // Handle to the printer belong to here
POPTITEM pOptItem;
LPWSTR pHelpFile; // pointer to the help file
PFORM_INFO_1 pFI1Base; // installed forms
PPLOTGPC pPlotGPC; // loaded/updated Plotter GPC data
WORD cOptItem;
BYTE Flags;
BYTE IdxPenSet; // plotter pen data set
DWORD dmErrBits; // ErrorBits for DM_
PLOTDEVMODE PlotDM; // Validated PLOTDEVMODE
PAPERINFO CurPaper; // Current loaded form on the device
PPDATA PPData; // Printer Prop Data
HANDLE hCPSUI; // handle to the common ui pages
PCOMPROPSHEETUI pCPSUI; // pointer to COMPROPSHEETUI
PPLOTDEVMODE pPlotDMIn; // input devmode
PPLOTDEVMODE pPlotDMOut; // output devmode
DWORD ExtraData; // starting of extra data
} PRINTERINFO, *PPRINTERINFO;
#define PI_PADJHTINFO(pPI) (PDEVHTINFO)&((pPI)->ExtraData)
#define PI_PDEVHTADJDATA(pPI) (PDEVHTADJDATA)(PI_PADJHTINFO(pPI) + 1)
#define PI_PPENDATA(pPI) (PPENDATA)&((pPI)->ExtraData)
typedef struct _DOCPROPINFO {
HWND hWnd;
DWORD Result;
DOCUMENTPROPERTYHEADER DPHdr;
} DOCPROPINFO, *PDOCPROPINFO;
typedef struct _DEVPROPINFO {
HWND hWnd;
DWORD Result;
DEVICEPROPERTYHEADER DPHdr;
} DEVPROPINFO, *PDEVPROPINFO;
typedef UINT (* _CREATEOIFUNC)(PPRINTERINFO pPI,
POPTITEM pOptItem,
LPVOID pOIData);
#define CREATEOIFUNC UINT
typedef struct _OPDATA {
WORD Flags;
WORD IDSName;
WORD IconID;
union {
WORD Style;
WORD IDSSeparator;
} DUMMYUNIONNAME;
union {
WORD wParam;
WORD IDSCheckedName;
} DUMMYUNIONNAME2;
SHORT sParam;
} OPDATA, *POPDATA;
#define ODF_PEN 0x00000001
#define ODF_RASTER 0x00000002
#define ODF_PEN_RASTER (ODF_PEN | ODF_RASTER)
#define ODF_COLOR 0x00000004
#define ODF_ROLLFEED 0x00000008
#define ODF_ECB 0x00000010
#define ODF_INC_IDSNAME 0x00000020
#define ODF_INC_ICONID 0x00000040
#define ODF_NO_INC_POPDATA 0x00000080
#define ODF_COLLAPSE 0x00000100
#define ODF_CALLBACK 0x00000200
#define ODF_NO_PAPERTRAY 0x00000400
#define ODF_CALLCREATEOI 0x00000800
#define ODF_MANUAL_FEED 0x00001000
#define OI_LEVEL_1 0
#define OI_LEVEL_2 1
#define OI_LEVEL_3 2
#define OI_LEVEL_4 3
#define OI_LEVEL_5 4
#define OI_LEVEL_6 5
typedef struct _OIDATA {
DWORD Flags;
BYTE NotUsed;
BYTE Level;
BYTE DMPubID;
BYTE Type;
WORD IDSName;
union {
WORD IconID;
WORD Style;
} DUMMYUNIONNAME;
WORD HelpIdx;
WORD cOPData;
union {
POPDATA pOPData;
_CREATEOIFUNC pfnCreateOI;
} DUMMYUNIONNAME2;
} OIDATA, *POIDATA;
#define PI_OFF(x) (WORD)offsetof(PRINTERINFO, x)
#define PLOTDM_OFF(x) (WORD)offsetof(PLOTDEVMODE, x)
#define OPTIF_NONE 0
#define PP_FORMTRAY_ASSIGN (DMPUB_USER + 0)
#define PP_INSTALLED_FORM (DMPUB_USER + 1)
#define PP_MANUAL_FEED_METHOD (DMPUB_USER + 2)
#define PP_PRINT_FORM_OPTIONS (DMPUB_USER + 3)
#define PP_AUTO_ROTATE (DMPUB_USER + 4)
#define PP_PRINT_SMALLER_PAPER (DMPUB_USER + 5)
#define PP_HT_SETUP (DMPUB_USER + 6)
#define PP_INSTALLED_PENSET (DMPUB_USER + 7)
#define PP_PEN_SETUP (DMPUB_USER + 8)
#define PP_PENSET (DMPUB_USER + 9)
#define PP_PEN_NUM (DMPUB_USER + 10)
#define DP_HTCLRADJ (DMPUB_USER + 0)
#define DP_FILL_TRUETYPE (DMPUB_USER + 1)
#define DP_QUICK_POSTER_MODE (DMPUB_USER + 2)
//
// Icon ID
//
#define IDI_RASTER_ROLLFEED 64089
#define IDI_RASTER_TRAYFEED 64090
#define IDI_PEN_ROLLFEED 64087
#define IDI_PEN_TRAYFEED 64088
#define IDI_ROLLPAPER 64091
#define IDI_PEN_SETUP 64093
#define IDI_PENSET 64092
#define IDI_DEFAULT_PENCLR 1007
#define IDI_PENCLR 64092
#define IDI_AUTO_ROTATE_NO 1009
#define IDI_AUTO_ROTATE_YES 1010
#define IDI_PRINT_SMALLER_PAPER_NO 1011
#define IDI_PRINT_SMALLER_PAPER_YES 1012
#define IDI_MANUAL_CX 1013
#define IDI_MANUAL_CY 1014
#define IDI_FILL_TRUETYPE_NO 1015
#define IDI_FILL_TRUETYPE_YES 1016
#define IDI_COLOR_FIRST IDI_WHITE
#define IDI_WHITE 1100
#define IDI_BLACK 1101
#define IDI_RED 1102
#define IDI_GREEN 1103
#define IDI_YELLOW 1104
#define IDI_BLUE 1105
#define IDI_MAGENTA 1106
#define IDI_CYAN 1107
#define IDI_ORANGE 1108
#define IDI_BROWN 1109
#define IDI_VIOLET 1110
#define IDI_COLOR_LAST IDI_VIOLET
//
// String table ID
//
#define IDS_PLOTTER_DRIVER 1900
#define IDS_CAUTION 1901
#define IDS_NO_MEMORY 1902
#define IDS_INVALID_DATA 1903
#define IDS_FORM_TOO_BIG 1904
#define IDS_INV_DMSIZE 1905
#define IDS_INV_DMVERSION 1906
#define IDS_INV_DMDRIVEREXTRA 1907
#define IDS_INV_DMCOLOR 1908
#define IDS_INV_DMCOPIES 1909
#define IDS_INV_DMSCALE 1910
#define IDS_INV_DMORIENTATION 1911
#define IDS_INV_DMFORM 1912
#define IDS_INV_DMQUALITY 1913
#define IDS_FORM_NOT_AVAI 1914
#define IDS_MODEL 1915
#define IDS_HELP_FILENAME 1916
#define IDS_NO_HELP 1918
#define IDS_PP_NO_SAVE 1919
#define IDS_INSTALLED_FORM 2030
#define IDS_MANUAL_FEEDER 2040
#define IDS_MANUAL_FEED_METHOD 2041
#define IDS_MANUAL_CX 2042
#define IDS_MANUAL_CY 2043
#define IDS_ROLLFEED 2044
#define IDS_MAINFEED 2045
#define IDS_PRINT_FORM_OPTIONS 2050
#define IDS_AUTO_ROTATE 2051
#define IDS_PRINT_SAMLLER_PAPER 2052
#define IDS_INSTALLED_PENSET 2060
#define IDS_PEN_SETUP 2061
#define IDS_PENSET_FIRST IDS_PENSET_1
#define IDS_PENSET_1 2070
#define IDS_PENSET_2 2071
#define IDS_PENSET_3 2072
#define IDS_PENSET_4 2073
#define IDS_PENSET_5 2074
#define IDS_PENSET_6 2075
#define IDS_PENSET_7 2076
#define IDS_PENSET_8 2077
#define IDS_PENSET_LAST IDS_PENSET_8
#define IDS_PEN_NUM 2100
#define IDS_DEFAULT_PENCLR 2101
#define IDS_QUALITY_FIRST IDS_QUALITY_DRAFT
#define IDS_QUALITY_DRAFT 2110
#define IDS_QUALITY_LOW 2111
#define IDS_QUALITY_MEDIUM 2112
#define IDS_QUALITY_HIGH 2113
#define IDS_QUALITY_LAST IDS_QUALITY_HIGH
#define IDS_COLOR_FIRST IDS_WHITE
#define IDS_WHITE 2120
#define IDS_BLACK 2121
#define IDS_RED 2122
#define IDS_GREEN 2123
#define IDS_YELLOW 2124
#define IDS_BLUE 2125
#define IDS_MAGENTA 2126
#define IDS_CYAN 2127
#define IDS_ORANGE 2128
#define IDS_BROWN 2129
#define IDS_VIOLET 2130
#define IDS_COLOR_LAST IDS_VIOLET
#define IDS_FILL_TRUETYPE 2140
#define IDS_POSTER_MODE 2150
#define IDS_USERFORM 2200
//
// Help Index for Printer Properties
//
#define IDH_FORMTRAYASSIGN 5000
#define IDH_FORM_ROLL_FEEDER 5010
#define IDH_FORM_MAIN_FEEDER 5020
#define IDH_FORM_MANUAL_FEEDER 5030
#define IDH_MANUAL_FEED_METHOD 5040
#define IDH_PRINT_FORM_OPTIONS 5050
#define IDH_AUTO_ROTATE 5060
#define IDH_PRINT_SMALLER_PAPER 5070
#define IDH_HALFTONE_SETUP 5080
#define IDH_INSTALLED_PENSET 5090
#define IDH_PEN_SETUP 5100
#define IDH_PENSET 5110
#define IDH_PEN_NUM 5120
//
// Help Index for Document Properties
#define IDH_FORMNAME 5500
#define IDH_ORIENTATION 5510
#define IDH_COPIES_COLLATE 5520
#define IDH_PRINTQUALITY 5530
#define IDH_COLOR 5540
#define IDH_SCALE 5550
#define IDH_HTCLRADJ 5560
#define IDH_FILL_TRUETYPE 5570
#define IDH_POSTER_MODE 5580
#endif // _PLOTUI_
.editbox {
margin: .4em;
padding: 0;
font-family: monospace;
font-size: 10pt;
color: black;
}
.editbox p {
margin: 0;
}
span.sp-keyword {
color: #708;
}
span.sp-prefixed {
color: #5d1;
}
span.sp-var {
color: #00c;
}
span.sp-comment {
color: #a70;
}
span.sp-literal {
color: #a22;
}
span.sp-uri {
color: #292;
}
span.sp-operator {
color: #088;
}
.app {
padding: 0;
display: flex;
flex-direction: row;
padding: 16px;
}
.x6-graph {
flex: 1;
box-shadow: 0 0 10px 1px #e9e9e9;
}
# Update the desktop database:
if [ -x usr/bin/update-desktop-database ]; then
chroot . /usr/bin/update-desktop-database /usr/share/applications > /dev/null 2>&1
fi
# -*- coding: iso-8859-1 -*-
""" crypto.cipher.tkip_encr
TKIP encryption from IEEE 802.11 TGi
TKIP uses WEP (ARC4 with crc32) and key mixing
This is only the encryption and not the Michael integrity check!
Copyright © (c) 2002 by Paul A. Lambert
Read LICENSE.txt for license information.
November 2002
"""
from crypto.cipher.arc4 import ARC4
from zlib import crc32
from struct import pack
from crypto.keyedHash.tkip_key_mixing import TKIP_Mixer
from crypto.errors import BadKeySizeError, IntegrityCheckError
from binascii_plus import *
class TKIP_encr:
""" TKIP Stream Cipher Algorithm without the Michael integrity check
TKIP encryption on an MPDU using WEP with a longer 'iv'
and the TKIP key mixing algorithm. This does NOT include
the Michael integrity algorithm that operates on the MSDU data.
"""
def __init__(self, key=None, transmitterAddress=None, keyID=None):
""" Initialize TKIP_encr, key -> octet string for key """
assert(keyID == 0 or keyID == None), 'keyID should be zero in TKIP'
self.keyId = 0
self.name = 'TKIP_encr'
self.strength = 128
self.encryptHeaderSize = 8 # used to skip octets on decrypt
self.arc4 = ARC4() # base algorithm
self.keyMixer = TKIP_Mixer(key, transmitterAddress)
if key != None: # normally in base init, uses adjusted keySize
self.setKey(key)
if transmitterAddress != None:
self.setTA(transmitterAddress)
def setKey(self, key, ta=None):
""" Set key, key string is 16 octets long """
self.keyMixer.setKey(key)
if ta != None:
self.setTA(ta)
def setTA(self, transmitterAddress):
""" Set the transmitter address """
self.keyMixer.setTA(transmitterAddress)
def _getIVandKeyID(self, cipherText):
""" Parse the TKIP header to get iv and set KeyID
iv is returned as octet string and is little-endian!!!
"""
assert(ord(cipherText[3]) & 0x20), 'extIV SHOULD be set in TKIP header'
self.setCurrentKeyID = (ord(cipherText[3]) & 0xC0) >> 6
return cipherText[:3] + cipherText[5:9] # note iv octets are little-endian!!!
def _makeARC4key(self, tscOctets, keyID=0):
""" Make an ARC4 key from TKIP Sequence Counter Octets (little-endian) """
if keyID != 0 :
raise ValueError('TKIP expects keyID of zero')
print "tscOctets in tkmixer=", b2a_p(tscOctets)
newKey = self.keyMixer.newKey(tscOctets)
print "newKey=", b2a_p(newKey)
return newKey
def encrypt(self, plainText, iv):
""" Encrypt a string and return a binary string
iv is 6 octets of little-endian encoded pn
"""
assert(len(iv) == 6), 'TKIP bad IV size on encryption'
self.pnField = iv
self.arc4.setKey(self._makeARC4key(iv))
eh1 = chr((ord(iv[0]) | 0x20) & 0x7f)
encryptionHeader = iv[0] + eh1 + iv[1] + chr((self.keyId << 6) | 0x20) + iv[2:]
crc = pack('<I', crc32(plainText))
cipherText = encryptionHeader + self.arc4.encrypt(plainText + crc)
return cipherText
def decrypt(self, cipherText):
""" Decrypt a WEP packet, assumes WEP 4 byte header on packet """
assert(ord(cipherText[3]) & 0x20), 'extIV SHOULD be set in TKIP header'
self.setCurrentKeyID = (ord(cipherText[3]) & 0xC0) >> 6
iv = cipherText[0] + cipherText[2] + cipherText[4:8]
self.pnField = iv
self.arc4.setKey(self._makeARC4key(iv))
plainText = self.arc4.decrypt(cipherText[self.encryptHeaderSize:])
if plainText[-4:] != pack('<I', crc32(plainText[:-4])): # check data integrity
raise IntegrityCheckError('WEP CRC Integrity Check Error')
return plainText[:-4]
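Both `encrypt` and `decrypt` above frame the plaintext with a little-endian CRC32 trailer (the WEP ICV) before applying ARC4. The framing on its own, without the cipher or key mixing, looks like this in Python 3:

```python
import zlib
from struct import pack

def add_crc_trailer(plaintext):
    # WEP-style ICV: little-endian CRC32 appended before encryption.
    return plaintext + pack('<I', zlib.crc32(plaintext) & 0xFFFFFFFF)

def check_crc_trailer(framed):
    # Recompute the CRC over the data and compare to the stored ICV,
    # as decrypt() does after the ARC4 pass.
    data, icv = framed[:-4], framed[-4:]
    if pack('<I', zlib.crc32(data) & 0xFFFFFFFF) != icv:
        raise ValueError('WEP CRC Integrity Check Error')
    return data
```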
// g2o - General Graph Optimization
// Copyright (C) 2011 R. Kuemmerle, G. Grisetti, W. Burgard
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
// TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
// PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
// TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef G2O_PROPERTY_H_
#define G2O_PROPERTY_H_
#include <string>
#include <map>
#include <sstream>
#include "string_tools.h"
namespace g2o {
class BaseProperty {
public:
BaseProperty(const std::string name_);
virtual ~BaseProperty();
const std::string& name() {return _name;}
virtual std::string toString() const = 0;
virtual bool fromString(const std::string& s) = 0;
protected:
std::string _name;
};
template <typename T>
class Property: public BaseProperty {
public:
typedef T ValueType;
Property(const std::string& name_): BaseProperty(name_){}
Property(const std::string& name_, const T& v): BaseProperty(name_), _value(v){}
void setValue(const T& v) {_value = v; }
const T& value() const {return _value;}
virtual std::string toString() const
{
std::stringstream sstr;
sstr << _value;
return sstr.str();
}
virtual bool fromString(const std::string& s)
{
bool status = convertString(s, _value);
return status;
}
protected:
T _value;
};
/**
* \brief a collection of properties mapping from name to the property itself
*/
class PropertyMap : protected std::map<std::string, BaseProperty*>
{
public:
typedef std::map<std::string, BaseProperty*> BaseClass;
typedef BaseClass::iterator PropertyMapIterator;
typedef BaseClass::const_iterator PropertyMapConstIterator;
~PropertyMap();
/**
* add a property to the map
*/
bool addProperty(BaseProperty* p);
/**
* remove a property from the map
*/
bool eraseProperty(const std::string& name_);
/**
* return a property by its name
*/
template <typename P>
P* getProperty(const std::string& name_)
{
PropertyMapIterator it=find(name_);
if (it==end())
return 0;
return dynamic_cast<P*>(it->second);
}
template <typename P>
const P* getProperty(const std::string& name_) const
{
PropertyMapConstIterator it=find(name_);
if (it==end())
return 0;
return dynamic_cast<P*>(it->second);
}
/**
* create a property and insert it
*/
template <typename P>
P* makeProperty(const std::string& name_, const typename P::ValueType& v)
{
PropertyMapIterator it=find(name_);
if (it==end()){
P* p=new P(name_, v);
addProperty(p);
return p;
} else
return dynamic_cast<P*>(it->second);
}
/**
   * update a specific property with a new value
   * @return true if the parameter is stored and the update was carried out
*/
bool updatePropertyFromString(const std::string& name, const std::string& value);
/**
* update the map based on a name=value string, e.g., name1=val1,name2=val2
* @return true, if it was possible to update all parameters
*/
bool updateMapFromString(const std::string& values);
void writeToCSV(std::ostream& os) const;
using BaseClass::size;
using BaseClass::begin;
using BaseClass::end;
using BaseClass::iterator;
using BaseClass::const_iterator;
};
typedef Property<int> IntProperty;
typedef Property<bool> BoolProperty;
typedef Property<float> FloatProperty;
typedef Property<double> DoubleProperty;
typedef Property<std::string> StringProperty;
} // end namespace
#endif
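The `updateMapFromString` contract documented above (a comma-separated `name1=val1,name2=val2` string, returning true only if every parameter could be updated) can be sketched in Python. This is an illustrative model of the documented behavior only; the real g2o implementation lives in the corresponding `.cpp` and may differ in detail.

```python
def update_map_from_string(props, values):
    """Update props from a 'name1=val1,name2=val2' string.

    Returns True only if every named property already exists,
    mirroring the documented 'all parameters updated' contract.
    """
    ok = True
    for token in values.split(','):
        name, sep, value = token.partition('=')
        if not sep or name not in props:
            ok = False       # unknown or malformed entry
            continue
        props[name] = value  # stored as a string, like fromString()
    return ok
```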
| {
"pile_set_name": "Github"
} |
var edmapUrl = 'https://edmaps.rcsb.org/maps/'
stage.setParameters({
cameraType: 'orthographic',
mousePreset: 'coot'
})
function addElement (el) {
Object.assign(el.style, {
position: 'absolute',
zIndex: 10
})
stage.viewer.container.appendChild(el)
}
function createElement (name, properties, style) {
var el = document.createElement(name)
Object.assign(el, properties)
Object.assign(el.style, style)
return el
}
function createSelect (options, properties, style) {
var select = createElement('select', properties, style)
options.forEach(function (d) {
select.add(createElement('option', {
value: d[ 0 ], text: d[ 1 ]
}))
})
return select
}
function createFileButton (label, properties, style) {
var input = createElement('input', Object.assign({
type: 'file'
}, properties), { display: 'none' })
addElement(input)
var button = createElement('input', {
value: label,
type: 'button',
onclick: function () { input.click() }
}, style)
return button
}
var scroll2fofc, scrollFofc
function isolevelScroll (stage, delta) {
var d = Math.sign(delta) / 10
stage.eachRepresentation(function (reprElem, comp) {
var p
if (scroll2fofc && reprElem === surf2fofc) {
p = reprElem.getParameters()
reprElem.setParameters({ isolevel: Math.max(0.01, p.isolevel + d) })
} else if (scrollFofc && (reprElem === surfFofc || reprElem === surfFofcNeg)) {
p = reprElem.getParameters()
reprElem.setParameters({ isolevel: Math.max(0.01, p.isolevel + d) })
}
})
}
stage.mouseControls.add('scroll', isolevelScroll)
var struc
function loadStructure (input) {
struc = undefined
surf2fofc = undefined
surfFofc = undefined
surfFofcNeg = undefined
file2fofcText.innerText = '2fofc file: none'
fileFofcText.innerText = 'fofc file: none'
isolevel2fofcText.innerText = ''
isolevelFofcText.innerText = ''
boxSizeRange.value = 10
seleInput.value = ''
stage.setFocus(0)
stage.removeAllComponents()
return stage.loadFile(input).then(function (o) {
fileStructureText.innerText = 'structure file: ' + o.name
struc = o
o.autoView()
o.addRepresentation('line', {
colorValue: 'yellow',
multipleBond: 'offset',
bondSpacing: 1.1,
linewidth: 6
})
o.addRepresentation('point', {
colorValue: 'yellow',
sizeAttenuation: false,
pointSize: 6,
alphaTest: 1,
useTexture: true
})
})
}
var surf2fofc
function load2fofc (input) {
return stage.loadFile(input).then(function (o) {
file2fofcText.innerText = '2fofc file: ' + o.name
isolevel2fofcText.innerText = '2fofc level: 1.5\u03C3'
boxSizeRange.value = 10
scrollSelect.value = '2fofc'
scroll2fofc = true
if (surfFofc) {
isolevelFofcText.innerText = 'fofc level: 3.0\u03C3'
surfFofc.setParameters({ isolevel: 3, boxSize: 10, contour: true, isolevelScroll: false })
surfFofcNeg.setParameters({ isolevel: 3, boxSize: 10, contour: true, isolevelScroll: false })
}
surf2fofc = o.addRepresentation('surface', {
color: 'skyblue',
isolevel: 1.5,
boxSize: 10,
useWorker: false,
contour: true,
opaqueBack: false,
isolevelScroll: false
})
})
}
var surfFofc, surfFofcNeg
function loadFofc (input) {
return stage.loadFile(input).then(function (o) {
fileFofcText.innerText = 'fofc file: ' + o.name
isolevelFofcText.innerText = 'fofc level: 3.0\u03C3'
boxSizeRange.value = 10
scrollSelect.value = '2fofc'
scrollFofc = false
if (surf2fofc) {
isolevel2fofcText.innerText = '2fofc level: 1.5\u03C3'
surf2fofc.setParameters({ isolevel: 1.5, boxSize: 10, contour: true, isolevelScroll: false })
}
surfFofc = o.addRepresentation('surface', {
color: 'mediumseagreen',
isolevel: 3,
boxSize: 10,
useWorker: false,
contour: true,
opaqueBack: false,
isolevelScroll: false
})
surfFofcNeg = o.addRepresentation('surface', {
color: 'tomato',
isolevel: 3,
negateIsolevel: true,
boxSize: 10,
useWorker: false,
contour: true,
opaqueBack: false,
isolevelScroll: false
})
})
}
var loadStructureButton = createFileButton('load structure', {
accept: '.pdb,.cif,.ent,.gz',
onchange: function (e) {
if (e.target.files[ 0 ]) {
exampleSelect.value = ''
loadStructure(e.target.files[ 0 ])
}
}
}, { top: '12px', left: '12px' })
addElement(loadStructureButton)
var load2fofcButton = createFileButton('load 2fofc', {
accept: '.map,.ccp4,.brix,.dsn6,.mrc,.gz',
onchange: function (e) {
if (e.target.files[ 0 ]) {
load2fofc(e.target.files[ 0 ])
}
}
}, { top: '36px', left: '12px' })
addElement(load2fofcButton)
var loadFofcButton = createFileButton('load fofc', {
accept: '.map,.ccp4,.brix,.dsn6,.mrc,.gz',
onchange: function (e) {
if (e.target.files[ 0 ]) {
loadFofc(e.target.files[ 0 ])
}
}
}, { top: '60px', left: '12px' })
addElement(loadFofcButton)
var exampleSelect = createSelect([
[ '', 'load example' ],
[ '3ek3', '3ek3' ],
[ '3nzd', '3nzd' ],
[ '1lee', '1lee' ]
], {
onchange: function (e) {
var id = e.target.value
loadExample(id).then(function () {
if (id === '3nzd') {
seleInput.value = 'NDP'
} else if (id === '1lee') {
seleInput.value = 'R36 and (.C28 or .N1)'
}
applySele(seleInput.value)
})
}
}, { top: '84px', left: '12px' })
addElement(exampleSelect)
var seleText = createElement('span', {
innerText: 'center selection',
title: 'press enter to apply and center'
}, { top: '114px', left: '12px', color: 'lightgrey' })
addElement(seleText)
var lastSele
function checkSele (str) {
var selection = new NGL.Selection(str)
return !selection.selection[ 'error' ]
}
function applySele (value) {
if (value) {
lastSele = value
struc.autoView(value)
var z = stage.viewer.camera.position.z
stage.setFocus(100 - Math.abs(z / 10))
}
}
var seleInput = createElement('input', {
type: 'text',
title: 'press enter to apply and center',
onkeypress: function (e) {
var value = e.target.value
var character = String.fromCharCode(e.which)
if (e.keyCode === 13) {
e.preventDefault()
if (checkSele(value)) {
if (struc) {
applySele(value)
}
e.target.style.backgroundColor = 'white'
} else {
e.target.style.backgroundColor = 'tomato'
}
} else if (lastSele !== value + character) {
e.target.style.backgroundColor = 'skyblue'
} else {
e.target.style.backgroundColor = 'white'
}
}
}, { top: '134px', left: '12px', width: '120px' })
addElement(seleInput)
var surfaceSelect = createSelect([
[ 'contour', 'contour' ],
[ 'wireframe', 'wireframe' ],
[ 'smooth', 'smooth' ],
[ 'flat', 'flat' ]
], {
onchange: function (e) {
var v = e.target.value
var p
if (v === 'contour') {
p = {
contour: true,
flatShaded: false,
opacity: 1,
metalness: 0,
wireframe: false
}
} else if (v === 'wireframe') {
p = {
contour: false,
flatShaded: false,
opacity: 1,
metalness: 0,
wireframe: true
}
} else if (v === 'smooth') {
p = {
contour: false,
flatShaded: false,
opacity: 0.5,
metalness: 0,
wireframe: false
}
} else if (v === 'flat') {
p = {
contour: false,
flatShaded: true,
opacity: 0.5,
metalness: 0.2,
wireframe: false
}
}
stage.getRepresentationsByName('surface').setParameters(p)
}
}, { top: '170px', left: '12px' })
addElement(surfaceSelect)
var toggle2fofcButton = createElement('input', {
type: 'button',
value: 'toggle 2fofc',
onclick: function (e) {
surf2fofc.toggleVisibility()
}
}, { top: '194px', left: '12px' })
addElement(toggle2fofcButton)
var toggleFofcButton = createElement('input', {
type: 'button',
value: 'toggle fofc',
onclick: function (e) {
surfFofc.toggleVisibility()
surfFofcNeg.toggleVisibility()
}
}, { top: '218px', left: '12px' })
addElement(toggleFofcButton)
addElement(createElement('span', {
innerText: 'box size'
}, { top: '242px', left: '12px', color: 'lightgrey' }))
var boxSizeRange = createElement('input', {
type: 'range',
value: 10,
min: 1,
max: 50,
step: 1,
oninput: function (e) {
stage.getRepresentationsByName('surface').setParameters({
boxSize: parseInt(e.target.value)
})
}
}, { top: '258px', left: '12px' })
addElement(boxSizeRange)
var screenshotButton = createElement('input', {
type: 'button',
value: 'screenshot',
onclick: function () {
stage.makeImage({
factor: 1,
antialias: false,
trim: false,
transparent: false
}).then(function (blob) {
NGL.download(blob, 'ngl-xray-viewer-screenshot.png')
})
}
}, { top: '282px', left: '12px' })
addElement(screenshotButton)
var scrollSelect = createSelect([
[ '2fofc', 'scroll 2fofc' ],
[ 'fofc', 'scroll fofc' ],
[ 'both', 'scroll both' ]
], {
onchange: function (e) {
var v = e.target.value
if (v === '2fofc') {
scroll2fofc = true
scrollFofc = false
} else if (v === 'fofc') {
scroll2fofc = false
scrollFofc = true
} else if (v === 'both') {
scroll2fofc = true
scrollFofc = true
}
}
}, { top: '306px', left: '12px' })
addElement(scrollSelect)
var loadEdmapText = createElement('span', {
innerText: 'load edmap for pdb id',
title: 'press enter to load'
}, { top: '330px', left: '12px', color: 'lightgrey' })
addElement(loadEdmapText)
var loadEdmapInput = createElement('input', {
type: 'text',
title: 'press enter to load',
onkeypress: function (e) {
var value = e.target.value
if (e.keyCode === 13) {
e.preventDefault()
loadStructure('rcsb://' + value)
load2fofc(edmapUrl + value + '_2fofc.dsn6')
loadFofc(edmapUrl + value + '_fofc.dsn6')
}
}
}, { top: '350px', left: '12px', width: '120px' })
addElement(loadEdmapInput)
var isolevel2fofcText = createElement(
'span', {}, { bottom: '32px', left: '12px', color: 'lightgrey' }
)
addElement(isolevel2fofcText)
var isolevelFofcText = createElement(
'span', {}, { bottom: '12px', left: '12px', color: 'lightgrey' }
)
addElement(isolevelFofcText)
var fileStructureText = createElement('span', {
innerText: 'structure file: none'
}, { bottom: '52px', right: '12px', color: 'lightgrey' })
addElement(fileStructureText)
var file2fofcText = createElement('span', {
innerText: '2fofc file: none'
}, { bottom: '32px', right: '12px', color: 'lightgrey' })
addElement(file2fofcText)
var fileFofcText = createElement('span', {
innerText: 'fofc file: none'
}, { bottom: '12px', right: '12px', color: 'lightgrey' })
addElement(fileFofcText)
stage.mouseControls.add('scroll', function () {
if (surf2fofc) {
var level2fofc = surf2fofc.getParameters().isolevel.toFixed(1)
isolevel2fofcText.innerText = '2fofc level: ' + level2fofc + '\u03C3'
}
if (surfFofc) {
var levelFofc = surfFofc.getParameters().isolevel.toFixed(1)
isolevelFofcText.innerText = 'fofc level: ' + levelFofc + '\u03C3'
}
})
function loadExample (id) {
var pl
if (id === '3ek3') {
pl = [
loadStructure('data://3ek3.cif'),
load2fofc('data://3ek3-2fofc.map.gz'),
loadFofc('data://3ek3-fofc.map.gz')
]
} else if (id === '3nzd') {
pl = [
loadStructure('data://3nzd.cif'),
load2fofc('data://3nzd.ccp4.gz'),
loadFofc('data://3nzd_diff.ccp4.gz')
]
} else if (id === '1lee') {
pl = [
loadStructure('data://1lee.pdb'),
load2fofc('data://1lee.ccp4'),
loadFofc('data://1lee_diff.ccp4')
]
}
exampleSelect.value = ''
return Promise.all(pl)
}
loadExample('3ek3')
| {
"pile_set_name": "Github"
} |
using System;
using System.Text.RegularExpressions;
namespace NBlog.Web.Application.Infrastructure
{
public static class StringExtensions
{
/// <summary>
/// Null if the string is empty, otherwise the original string.
        /// (Useful with the null-coalescing operator, e.g. myString.AsNullIfEmpty() ?? defaultString)
/// </summary>
public static string AsNullIfEmpty(this string items)
{
return string.IsNullOrEmpty(items) ? null : items;
}
/// <summary>
/// Null if the string is empty or whitespace, otherwise the original string.
        /// (Useful with the null-coalescing operator, e.g. myString.AsNullIfWhiteSpace() ?? defaultString)
/// </summary>
public static string AsNullIfWhiteSpace(this string items)
{
return string.IsNullOrWhiteSpace(items) ? null : items;
}
/// <summary>
/// Creates a URL friendly slug from a string
/// </summary>
public static string ToUrlSlug(this string str)
{
string originalValue = str;
            // Replace any character that is not alphanumeric with a hyphen
            str = Regex.Replace(str, "[^a-z0-9]", "-", RegexOptions.IgnoreCase);
            // Replace all double hyphens with a single hyphen
string pattern = "--";
while (Regex.IsMatch(str, pattern))
str = Regex.Replace(str, pattern, "-", RegexOptions.IgnoreCase);
            // Remove leading and trailing hyphens ("-")
pattern = "^-|-$";
str = Regex.Replace(str, pattern, "", RegexOptions.IgnoreCase);
return str.ToLower();
}
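For illustration, the slug algorithm described by the comments above (any non-alphanumeric character becomes a hyphen, runs of hyphens collapse to one, leading/trailing hyphens are trimmed, result lowercased) can be sketched in Python:

```python
import re

def to_url_slug(s):
    s = re.sub(r'[^A-Za-z0-9]', '-', s)  # non-alphanumerics -> hyphen
    s = re.sub(r'-{2,}', '-', s)         # collapse hyphen runs
    return s.strip('-').lower()          # trim ends, lowercase
```

For example, `to_url_slug('Hello, World!')` yields `'hello-world'`.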
/// <summary>
        /// Combines two parts of a Uri similar to Path.Combine
/// </summary>
/// <param name="val"></param>
/// <param name="append"></param>
/// <returns></returns>
public static string UriCombine(this string val, string append)
{
if (String.IsNullOrEmpty(val))
{
return append;
}
if (String.IsNullOrEmpty(append))
{
return val;
}
return val.TrimEnd('/') + "/" + append.TrimStart('/');
}
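The joining rule used by `UriCombine` — trim the slashes at the seam and insert exactly one — is easy to model; a minimal Python sketch of the same logic:

```python
def uri_combine(val, append):
    # Mirrors UriCombine: empty parts pass through, otherwise trim the
    # joining slashes and insert exactly one separator.
    if not val:
        return append
    if not append:
        return val
    return val.rstrip('/') + '/' + append.lstrip('/')
```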
}
}
| {
"pile_set_name": "Github"
} |
/* i80586 lshift
*
* Copyright (C) 1992, 1994, 1998,
* 2001, 2002 Free Software Foundation, Inc.
*
* This file is part of Libgcrypt.
*
* Libgcrypt is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as
* published by the Free Software Foundation; either version 2.1 of
* the License, or (at your option) any later version.
*
* Libgcrypt is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
*
* Note: This code is heavily based on the GNU MP Library.
* Actually it's the same code with only minor changes in the
* way the data is stored; this is to support the abstraction
* of an optional secure memory allocation which may be used
* to avoid revealing of sensitive data due to paging etc.
*/
#include "sysdep.h"
#include "asm-syntax.h"
/*******************
* mpi_limb_t
* _gcry_mpih_lshift( mpi_ptr_t wp, (sp + 4)
* mpi_ptr_t up, (sp + 8)
* mpi_size_t usize, (sp + 12)
* unsigned cnt) (sp + 16)
*/
.text
ALIGN (3)
.globl C_SYMBOL_NAME(_gcry_mpih_lshift)
C_SYMBOL_NAME(_gcry_mpih_lshift:)
pushl %edi
pushl %esi
pushl %ebx
pushl %ebp
movl 20(%esp),%edi /* res_ptr */
movl 24(%esp),%esi /* s_ptr */
movl 28(%esp),%ebp /* size */
movl 32(%esp),%ecx /* cnt */
/* We can use faster code for shift-by-1 under certain conditions. */
cmp $1,%ecx
jne Lnormal
leal 4(%esi),%eax
cmpl %edi,%eax
jnc Lspecial /* jump if s_ptr + 1 >= res_ptr */
leal (%esi,%ebp,4),%eax
cmpl %eax,%edi
jnc Lspecial /* jump if res_ptr >= s_ptr + size */
Lnormal:
leal -4(%edi,%ebp,4),%edi
leal -4(%esi,%ebp,4),%esi
movl (%esi),%edx
subl $4,%esi
xorl %eax,%eax
shldl %cl,%edx,%eax /* compute carry limb */
pushl %eax /* push carry limb onto stack */
decl %ebp
pushl %ebp
shrl $3,%ebp
jz Lend
movl (%edi),%eax /* fetch destination cache line */
ALIGN (2)
Loop: movl -28(%edi),%eax /* fetch destination cache line */
movl %edx,%ebx
movl (%esi),%eax
movl -4(%esi),%edx
shldl %cl,%eax,%ebx
shldl %cl,%edx,%eax
movl %ebx,(%edi)
movl %eax,-4(%edi)
movl -8(%esi),%ebx
movl -12(%esi),%eax
shldl %cl,%ebx,%edx
shldl %cl,%eax,%ebx
movl %edx,-8(%edi)
movl %ebx,-12(%edi)
movl -16(%esi),%edx
movl -20(%esi),%ebx
shldl %cl,%edx,%eax
shldl %cl,%ebx,%edx
movl %eax,-16(%edi)
movl %edx,-20(%edi)
movl -24(%esi),%eax
movl -28(%esi),%edx
shldl %cl,%eax,%ebx
shldl %cl,%edx,%eax
movl %ebx,-24(%edi)
movl %eax,-28(%edi)
subl $32,%esi
subl $32,%edi
decl %ebp
jnz Loop
Lend: popl %ebp
andl $7,%ebp
jz Lend2
Loop2: movl (%esi),%eax
shldl %cl,%eax,%edx
movl %edx,(%edi)
movl %eax,%edx
subl $4,%esi
subl $4,%edi
decl %ebp
jnz Loop2
Lend2: shll %cl,%edx /* compute least significant limb */
movl %edx,(%edi) /* store it */
popl %eax /* pop carry limb */
popl %ebp
popl %ebx
popl %esi
popl %edi
ret
/* We loop from least significant end of the arrays, which is only
   permissible if the source and destination don't overlap, since the
function is documented to work for overlapping source and destination.
*/
Lspecial:
movl (%esi),%edx
addl $4,%esi
decl %ebp
pushl %ebp
shrl $3,%ebp
addl %edx,%edx
incl %ebp
decl %ebp
jz LLend
movl (%edi),%eax /* fetch destination cache line */
ALIGN (2)
LLoop: movl 28(%edi),%eax /* fetch destination cache line */
movl %edx,%ebx
movl (%esi),%eax
movl 4(%esi),%edx
adcl %eax,%eax
movl %ebx,(%edi)
adcl %edx,%edx
movl %eax,4(%edi)
movl 8(%esi),%ebx
movl 12(%esi),%eax
adcl %ebx,%ebx
movl %edx,8(%edi)
adcl %eax,%eax
movl %ebx,12(%edi)
movl 16(%esi),%edx
movl 20(%esi),%ebx
adcl %edx,%edx
movl %eax,16(%edi)
adcl %ebx,%ebx
movl %edx,20(%edi)
movl 24(%esi),%eax
movl 28(%esi),%edx
adcl %eax,%eax
movl %ebx,24(%edi)
adcl %edx,%edx
movl %eax,28(%edi)
leal 32(%esi),%esi /* use leal not to clobber carry */
leal 32(%edi),%edi
decl %ebp
jnz LLoop
LLend: popl %ebp
sbbl %eax,%eax /* save carry in %eax */
andl $7,%ebp
jz LLend2
addl %eax,%eax /* restore carry from eax */
LLoop2: movl %edx,%ebx
movl (%esi),%edx
adcl %edx,%edx
movl %ebx,(%edi)
leal 4(%esi),%esi /* use leal not to clobber carry */
leal 4(%edi),%edi
decl %ebp
jnz LLoop2
jmp LL1
LLend2: addl %eax,%eax /* restore carry from eax */
LL1: movl %edx,(%edi) /* store last limb */
sbbl %eax,%eax
negl %eax
popl %ebp
popl %ebx
popl %esi
popl %edi
ret
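As a model of what the routine above computes (not of its x86 implementation), here is a Python sketch of a multi-precision left shift over little-endian 32-bit limbs, returning the carry limb — the bits shifted out of the most significant limb, as described in the header comment:

```python
LIMB_BITS = 32
MASK = (1 << LIMB_BITS) - 1

def mpih_lshift(up, cnt):
    """Shift limb array `up` (least significant limb first) left by
    `cnt` bits, with 0 < cnt < LIMB_BITS; return (result, carry_limb)."""
    carry = up[-1] >> (LIMB_BITS - cnt)  # bits shifted out at the top
    wp = [0] * len(up)
    for i in range(len(up) - 1, 0, -1):
        wp[i] = ((up[i] << cnt) | (up[i - 1] >> (LIMB_BITS - cnt))) & MASK
    wp[0] = (up[0] << cnt) & MASK
    return wp, carry
```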
| {
"pile_set_name": "Github"
} |
#!/usr/bin/env python
#
# A library that provides a Python interface to the Telegram Bot API
# Copyright (C) 2015-2017
# Leandro Toledo de Souza <devs@python-telegram-bot.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser Public License for more details.
#
# You should have received a copy of the GNU Lesser Public License
# along with this program. If not, see [http://www.gnu.org/licenses/].
"""This module contains an object that represents a Telegram Invoice."""
from telegram import TelegramObject
class Invoice(TelegramObject):
"""This object contains basic information about an invoice.
Attributes:
title (:obj:`str`): Product name.
description (:obj:`str`): Product description.
start_parameter (:obj:`str`): Unique bot deep-linking parameter.
currency (:obj:`str`): Three-letter ISO 4217 currency code.
total_amount (:obj:`int`): Total price in the smallest units of the currency.
Args:
title (:obj:`str`): Product name.
description (:obj:`str`): Product description.
start_parameter (:obj:`str`): Unique bot deep-linking parameter that can be used to
generate this invoice.
currency (:obj:`str`): Three-letter ISO 4217 currency code.
total_amount (:obj:`int`): Total price in the smallest units of the currency (integer, not
float/double). For example, for a price of US$ 1.45 pass amount = 145.
**kwargs (:obj:`dict`): Arbitrary keyword arguments.
"""
def __init__(self, title, description, start_parameter, currency, total_amount, **kwargs):
self.title = title
self.description = description
self.start_parameter = start_parameter
self.currency = currency
self.total_amount = total_amount
@classmethod
def de_json(cls, data, bot):
if not data:
return None
return cls(**data)
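The `total_amount` convention from the docstring (prices expressed in the currency's smallest units, e.g. US$ 1.45 → 145) can be illustrated with a small helper. The default exponent of 2 is an assumption that holds for most, but not all, ISO 4217 currencies:

```python
def to_smallest_units(price, exponent=2):
    # US$ 1.45 with exponent 2 (cents) -> 145; JPY would use exponent 0.
    return round(price * 10 ** exponent)
```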
| {
"pile_set_name": "Github"
} |
-----BEGIN CERTIFICATE-----
MIICjTCCAfigAwIBAgIEMaYgRzALBgkqhkiG9w0BAQQwRTELMAkGA1UEBhMCVVMx
NjA0BgNVBAoTLU5hdGlvbmFsIEFlcm9uYXV0aWNzIGFuZCBTcGFjZSBBZG1pbmlz
dHJhdGlvbjAmFxE5NjA1MjgxMzQ5MDUrMDgwMBcROTgwNTI4MTM0OTA1KzA4MDAw
ZzELMAkGA1UEBhMCVVMxNjA0BgNVBAoTLU5hdGlvbmFsIEFlcm9uYXV0aWNzIGFu
ZCBTcGFjZSBBZG1pbmlzdHJhdGlvbjEgMAkGA1UEBRMCMTYwEwYDVQQDEwxTdGV2
ZSBTY2hvY2gwWDALBgkqhkiG9w0BAQEDSQAwRgJBALrAwyYdgxmzNP/ts0Uyf6Bp
miJYktU/w4NG67ULaN4B5CnEz7k57s9o3YY3LecETgQ5iQHmkwlYDTL2fTgVfw0C
AQOjgaswgagwZAYDVR0ZAQH/BFowWDBWMFQxCzAJBgNVBAYTAlVTMTYwNAYDVQQK
Ey1OYXRpb25hbCBBZXJvbmF1dGljcyBhbmQgU3BhY2UgQWRtaW5pc3RyYXRpb24x
DTALBgNVBAMTBENSTDEwFwYDVR0BAQH/BA0wC4AJODMyOTcwODEwMBgGA1UdAgQR
MA8ECTgzMjk3MDgyM4ACBSAwDQYDVR0KBAYwBAMCBkAwCwYJKoZIhvcNAQEEA4GB
AH2y1VCEw/A4zaXzSYZJTTUi3uawbbFiS2yxHvgf28+8Js0OHXk1H1w2d6qOHH21
X82tZXd/0JtG0g1T9usFFBDvYK8O0ebgz/P5ELJnBL2+atObEuJy1ZZ0pBDWINR3
WkDNLCGiTkCKp0F5EWIrVDwh54NNevkCQRZita+z4IBO
-----END CERTIFICATE-----
| {
"pile_set_name": "Github"
} |
EESchema-LIBRARY Version 2.3
#encoding utf-8
#
# 296-26322-1-ND
#
DEF 296-26322-1-ND VR 0 20 Y Y 1 F N
F0 "VR" 0 150 50 H V C CNN
F1 "296-26322-1-ND" 0 250 50 H V C CNN
F2 "digikey-footprints:SOT-23-3" 325 150 50 H I L CNN
F3 "" 200 300 60 H I L CNN
DRAW
S 200 100 -200 -200 1 1 0 f
X IN 1 -300 0 100 R 50 50 1 1 W
X OUT 2 300 0 100 L 50 50 1 1 O
X GND 3 0 -300 100 U 50 50 1 1 W
ENDDRAW
ENDDEF
#
#End Library
| {
"pile_set_name": "Github"
} |
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# Author: Donny You(youansheng@gmail.com)
# Adapted from: https://github.com/zhanghang1989/PyTorch-Encoding/blob/master/encoding/parallel.py
import functools
import threading
import torch
import torch.cuda.comm as comm
from torch.autograd import Function
from torch.nn.parallel._functions import Broadcast
from torch.nn.parallel.data_parallel import DataParallel
from torch.nn.parallel.parallel_apply import get_a_var
from torch.nn.parallel.scatter_gather import gather
try:
from torch._six import container_abcs
except ImportError:
    print("torch._six ImportError: Lower version of pytorch.")
    import collections.abc as container_abcs
from .scatter_gather import scatter_kwargs
class Reduce(Function):
@staticmethod
def forward(ctx, *inputs):
ctx.target_gpus = [inputs[i].get_device() for i in range(len(inputs))]
inputs = sorted(inputs, key=lambda i: i.get_device())
return comm.reduce_add(inputs)
@staticmethod
def backward(ctx, gradOutput):
return Broadcast.apply(ctx.target_gpus, gradOutput)
class ParallelModel(DataParallel):
"""
Example::
>>> net = ParallelModel(model, device_ids=[0, 1, 2])
>>> y = net(x)
"""
def __init__(self, module, device_ids=None, output_device=None, dim=0, gather_=True):
super(ParallelModel, self).__init__(module, device_ids, output_device, dim)
self.gather_ = gather_
def gather(self, outputs, output_device):
if self.gather_:
return gather(outputs, output_device, dim=self.dim)
return outputs
def scatter(self, inputs, kwargs, device_ids):
return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
class ParallelCriterion(DataParallel):
"""
Example::
>>> net = ParallelModel(model, device_ids=[0, 1, 2])
>>> criterion = ParallelCriterion(criterion, device_ids=[0, 1, 2])
>>> y = net(x)
>>> loss = criterion(y, target)
"""
def __init__(self, module, device_ids=None, output_device=None, dim=0):
super(ParallelCriterion, self).__init__(module, device_ids, output_device, dim)
def scatter(self, inputs, kwargs, device_ids):
return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
def forward(self, inputs, **kwargs):
        # inputs should already be scattered
# scattering the targets instead
if not self.device_ids:
return self.module(inputs, **kwargs)
kwargs = (kwargs, ) * len(inputs)
if len(self.device_ids) == 1:
return self.module(inputs[0], **kwargs[0])
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
# targets = tuple(targets_per_gpu[0] for targets_per_gpu in targets)
outputs = _criterion_parallel_apply(replicas, inputs, kwargs)
if isinstance(outputs[0], container_abcs.Mapping):
return {key: (Reduce.apply(*[d[key] for d in outputs]) / len(outputs)) for key in outputs[0]}
elif isinstance(outputs[0], container_abcs.Sequence):
transposed = zip(*outputs)
return [Reduce.apply(*samples) / len(outputs) for samples in transposed]
else:
return Reduce.apply(*outputs) / len(outputs)
def _criterion_parallel_apply(modules, inputs, kwargs_tup=None, devices=None):
assert len(modules) == len(inputs)
if kwargs_tup:
assert len(modules) == len(kwargs_tup)
else:
kwargs_tup = ({},) * len(modules)
if devices is not None:
assert len(modules) == len(devices)
else:
devices = [None] * len(modules)
lock = threading.Lock()
results = {}
grad_enabled = torch.is_grad_enabled()
def _worker(i, module, input, kwargs, device=None):
torch.set_grad_enabled(grad_enabled)
if device is None:
device = get_a_var(input).get_device()
try:
with torch.cuda.device(device):
output = module(input, **kwargs)
with lock:
results[i] = output
except Exception as e:
with lock:
results[i] = e
if len(modules) > 1:
threads = [threading.Thread(target=_worker,
args=(i, module, input, kwargs, device),)
for i, (module, input, kwargs, device) in
enumerate(zip(modules, inputs, kwargs_tup, devices))]
for thread in threads:
thread.start()
for thread in threads:
thread.join()
else:
_worker(0, modules[0], inputs[0], kwargs_tup[0], devices[0])
outputs = []
for i in range(len(inputs)):
output = results[i]
if isinstance(output, Exception):
raise output
outputs.append(output)
return outputs
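The worker/thread pattern in `_criterion_parallel_apply` — one thread per replica, results and exceptions collected by index under a lock, exceptions re-raised after join — can be reduced to a framework-free Python sketch:

```python
import threading

def parallel_apply(fns, inputs):
    # One thread per callable; results (or exceptions) stored by index.
    results = {}
    lock = threading.Lock()

    def worker(i, fn, arg):
        try:
            out = fn(arg)
        except Exception as e:  # keep the exception, re-raise on the caller
            out = e
        with lock:
            results[i] = out

    threads = [threading.Thread(target=worker, args=(i, f, x))
               for i, (f, x) in enumerate(zip(fns, inputs))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    outputs = []
    for i in range(len(inputs)):
        if isinstance(results[i], Exception):
            raise results[i]
        outputs.append(results[i])
    return outputs
```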
| {
"pile_set_name": "Github"
} |
/*
* Author: Garrett Barboza <garrett.barboza@kapricasecurity.com>
*
* Copyright (c) 2014 Kaprica Security, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*
*/
#include "libcgc.h"
#include "cgc_malloc.h"
#include "cgc_stdlib.h"
cgc_size_t size_class_limits[NUM_FREE_LISTS] = {
2, 3, 4, 8,
16, 24, 32, 48,
64, 96, 128, 192,
256, 384, 512, 768,
1024, 1536, 2048, 3072,
4096, 6144, 8192, 12288,
16384, 24576, 32768, 49152,
65536, 98304, 131072, INT32_MAX
};
struct blk_t *cgc_free_lists[NUM_FREE_LISTS] = {0};
static void cgc_remove_from_blist(struct blk_t *blk)
{
if (blk->prev)
blk->prev->next = blk->next;
if (blk->next)
blk->next->prev = blk->prev;
}
int cgc_get_size_class(cgc_size_t size)
{
int i;
for (i = 0; i < NUM_FREE_LISTS && size > size_class_limits[i]; i++);
return i;
}
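The size-class lookup above is a linear scan for the first limit that can hold the request; it can be modeled in Python with the same table (an illustration of the lookup, not the allocator itself):

```python
SIZE_CLASS_LIMITS = [
    2, 3, 4, 8, 16, 24, 32, 48, 64, 96, 128, 192,
    256, 384, 512, 768, 1024, 1536, 2048, 3072,
    4096, 6144, 8192, 12288, 16384, 24576, 32768, 49152,
    65536, 98304, 131072, 2**31 - 1,
]

def get_size_class(size):
    # Index of the first size class whose limit is >= size,
    # using the same linear scan as cgc_get_size_class().
    i = 0
    while i < len(SIZE_CLASS_LIMITS) and size > SIZE_CLASS_LIMITS[i]:
        i += 1
    return i
```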
void cgc_insert_into_flist(struct blk_t *blk)
{
int sc_i = cgc_get_size_class(blk->size);
blk->free = 1;
if (cgc_free_lists[sc_i] == NULL) {
cgc_free_lists[sc_i] = blk;
return;
}
blk->fsucc = cgc_free_lists[sc_i];
cgc_free_lists[sc_i]->fpred = blk;
cgc_free_lists[sc_i] = blk;
blk->fpred = NULL;
}
void cgc_remove_from_flist(struct blk_t *blk)
{
int sc_i = cgc_get_size_class(blk->size);
if (blk->fpred)
blk->fpred->fsucc = blk->fsucc;
if (blk->fsucc)
blk->fsucc->fpred = blk->fpred;
if (cgc_free_lists[sc_i] == blk)
cgc_free_lists[sc_i] = blk->fsucc;
blk->fsucc = NULL;
blk->fpred = NULL;
blk->free = 0;
}
void cgc_coalesce(struct blk_t *blk)
{
/* prev and next are free */
if (blk->prev && blk->prev->free && blk->next && blk->next->free) {
cgc_remove_from_flist(blk->prev);
cgc_remove_from_flist(blk->next);
cgc_remove_from_flist(blk);
blk->prev->size += blk->size;
blk->prev->size += blk->next->size;
cgc_insert_into_flist(blk->prev);
cgc_remove_from_blist(blk->next);
cgc_remove_from_blist(blk);
/* Just prev is free */
} else if (blk->prev && blk->prev->free && blk->next && !blk->next->free) {
cgc_remove_from_flist(blk->prev);
cgc_remove_from_flist(blk);
blk->prev->size += blk->size;
cgc_insert_into_flist(blk->prev);
cgc_remove_from_blist(blk);
/* Just next is free */
} else if (blk->prev && !blk->prev->free && blk->next && blk->next->free) {
cgc_remove_from_flist(blk->next);
cgc_remove_from_flist(blk);
blk->size += blk->next->size;
cgc_insert_into_flist(blk);
cgc_remove_from_blist(blk->next);
}
}
| {
"pile_set_name": "Github"
} |
<?php
/*
* This file is part of ProgPilot, a static analyzer for security
*
* @copyright 2017 Eric Therond. All rights reserved
* @license MIT See LICENSE at the root of the project for more info
*/
namespace progpilot\Objects;
use PHPCfg\Op;
use PHPCfg\Script;
use progpilot\Objects\MyDefinition;
use progpilot\Dataflow\Definitions;
class MyFunction extends MyOp
{
const TYPE_FUNC_PROPERTY = 0x0001;
const TYPE_FUNC_STATIC = 0x0002;
const TYPE_FUNC_METHOD = 0x0004;
private $nbParams;
private $params;
private $returnDefs;
private $defs;
private $blocks;
private $visibility;
private $myClass;
private $instance;
private $blockId;
private $nameInstance;
private $thisDef;
private $backDef;
private $lastLine;
private $lastColumn;
private $lastBlockId;
private $isAnalyzed;
private $isDataAnalyzed;
private $myCode;
private $castReturn;
public $property;
public function __construct($name)
{
parent::__construct($name, 0, 0);
$this->params = [];
$this->returnDefs = [];
$this->visibility = "public";
        $this->myClass = null;
$this->nameInstance = null;
$this->thisDef = null;
$this->backDef = null;
$this->blockId = 0;
$this->nbParams = 0;
$this->lastLine = 0;
$this->lastColumn = 0;
$this->lastBlockId = 0;
$this->isAnalyzed = false;
$this->isDataAnalyzed = false;
$this->property = new MyProperty;
$this->defs = new Definitions;
$this->blocks = new \SplObjectStorage;
$this->myCode = new \progpilot\Code\MyCode;
$this->castReturn = MyDefinition::CAST_NOT_SAFE;
}
public function __clone()
{
$this->property = clone $this->property;
$this->blocks = clone $this->blocks;
$this->defs = clone $this->defs;
}
public function setIsDataAnalyzed($isDataAnalyzed)
{
$this->isDataAnalyzed = $isDataAnalyzed;
}
public function isDataAnalyzed()
{
return $this->isDataAnalyzed;
}
public function setIsAnalyzed($isAnalyzed)
{
$this->isAnalyzed = $isAnalyzed;
}
public function isAnalyzed()
{
return $this->isAnalyzed;
}
public function setMyCode($myCode)
{
$this->myCode = $myCode;
}
public function getMyCode()
{
return $this->myCode;
}
public function setLastLine($lastLine)
{
$this->lastLine = $lastLine;
}
public function setLastColumn($lastColumn)
{
$this->lastColumn = $lastColumn;
}
public function setLastBlockId($lastBlockId)
{
$this->lastBlockId = $lastBlockId;
}
public function getLastLine()
{
return $this->lastLine;
}
public function getLastColumn()
{
return $this->lastColumn;
}
public function getLastBlockId()
{
return $this->lastBlockId;
}
public function getMyClass()
{
        return $this->myClass;
}
public function setMyClass($myClass)
{
        $this->myClass = $myClass;
}
public function getThisDef()
{
return $this->thisDef;
}
public function setThisDef($thisDef)
{
$this->thisDef = $thisDef;
}
public function getBackDef()
{
return $this->backDef;
}
public function setBackDef($backDef)
{
$this->backDef = $backDef;
}
public function getNameInstance()
{
return $this->nameInstance;
}
public function setNameInstance($nameInstance)
{
$this->nameInstance = $nameInstance;
}
public function setVisibility($visibility)
{
$this->visibility = $visibility;
}
public function getVisibility()
{
return $this->visibility;
}
public function setBlocks($blocks)
{
$this->blocks = $blocks;
}
public function getBlocks()
{
return $this->blocks;
}
public function setDefs($defs)
{
$this->defs = $defs;
}
public function getDefs()
{
return $this->defs;
}
public function addParam($param)
{
$this->params[] = $param;
}
public function getParams()
{
return $this->params;
}
public function setNbParams($nbParams)
{
$this->nbParams = $nbParams;
}
public function getNbParams()
{
return $this->nbParams;
}
public function getParam($i)
{
if (isset($this->params[$i])) {
return $this->params[$i];
}
return null;
}
public function getReturnDefs()
{
return $this->returnDefs;
}
public function addReturnDef($return_def)
{
$this->returnDefs[] = $return_def;
}
public function getBlockId()
{
return $this->blockId;
}
public function setBlockId($blockId)
{
$this->blockId = $blockId;
}
public function setCastReturn($cast)
{
$this->castReturn = $cast;
}
public function getCastReturn()
{
return $this->castReturn;
}
}
| {
"pile_set_name": "Github"
} |
/*
* Copyright 2013 The Error Prone Authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.google.errorprone.refaster;
import com.google.auto.value.AutoValue;
import com.google.common.collect.ImmutableList;
import com.sun.source.tree.ModifiersTree;
import com.sun.source.tree.TreeVisitor;
import com.sun.tools.javac.code.Flags;
import com.sun.tools.javac.tree.JCTree.JCAnnotation;
import com.sun.tools.javac.tree.JCTree.JCModifiers;
import com.sun.tools.javac.util.List;
import java.util.Set;
import javax.lang.model.element.Modifier;
/**
* {@code UTree} representation of a {@code ModifiersTree}.
*
* @author lowasser@google.com (Louis Wasserman)
*/
@AutoValue
abstract class UModifiers extends UTree<JCModifiers> implements ModifiersTree {
public static UModifiers create(long flagBits, UAnnotation... annotations) {
return create(flagBits, ImmutableList.copyOf(annotations));
}
public static UModifiers create(long flagBits, Iterable<? extends UAnnotation> annotations) {
return new AutoValue_UModifiers(flagBits, ImmutableList.copyOf(annotations));
}
abstract long flagBits();
@Override
public abstract ImmutableList<UAnnotation> getAnnotations();
@Override
public JCModifiers inline(Inliner inliner) throws CouldNotResolveImportException {
return inliner
.maker()
.Modifiers(
flagBits(), List.convert(JCAnnotation.class, inliner.inlineList(getAnnotations())));
}
@Override
public Choice<Unifier> visitModifiers(ModifiersTree modifier, Unifier unifier) {
return Choice.condition(getFlags().equals(modifier.getFlags()), unifier);
}
@Override
public <R, D> R accept(TreeVisitor<R, D> visitor, D data) {
return visitor.visitModifiers(this, data);
}
@Override
public Kind getKind() {
return Kind.MODIFIERS;
}
@Override
public Set<Modifier> getFlags() {
return Flags.asModifierSet(flagBits());
}
}
| {
"pile_set_name": "Github"
} |
/*
* Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.
*/
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.sun.org.apache.xalan.internal.xsltc.compiler;
import com.sun.org.apache.bcel.internal.generic.ConstantPoolGen;
import com.sun.org.apache.bcel.internal.generic.INVOKESTATIC;
import com.sun.org.apache.bcel.internal.generic.INVOKEVIRTUAL;
import com.sun.org.apache.bcel.internal.generic.InstructionList;
import com.sun.org.apache.bcel.internal.generic.PUSH;
import com.sun.org.apache.xalan.internal.xsltc.compiler.util.ClassGenerator;
import com.sun.org.apache.xalan.internal.xsltc.compiler.util.ErrorMsg;
import com.sun.org.apache.xalan.internal.xsltc.compiler.util.MethodGenerator;
import com.sun.org.apache.xalan.internal.xsltc.compiler.util.StringType;
import com.sun.org.apache.xalan.internal.xsltc.compiler.util.Type;
import com.sun.org.apache.xalan.internal.xsltc.compiler.util.TypeCheckError;
import com.sun.org.apache.xalan.internal.xsltc.compiler.util.Util;
import jdk.xml.internal.JdkXmlFeatures;
/**
* @author Morten Jorgensen
*/
final class TransletOutput extends Instruction {
private Expression _filename;
private boolean _append;
/**
* Displays the contents of this <xsltc:output> element.
*/
public void display(int indent) {
indent(indent);
Util.println("TransletOutput: " + _filename);
}
/**
* Parse the contents of this <xsltc:output> element. The only attribute
 * we recognise is the 'file' attribute that contains the output filename.
*/
public void parseContents(Parser parser) {
// Get the output filename from the 'file' attribute
String filename = getAttribute("file");
// If the 'append' attribute is set to "yes" or "true",
// the output is appended to the file.
String append = getAttribute("append");
// Verify that the filename is in fact set
if ((filename == null) || (filename.equals(EMPTYSTRING))) {
reportError(this, parser, ErrorMsg.REQUIRED_ATTR_ERR, "file");
}
// Save filename as an attribute value template
_filename = AttributeValue.create(this, filename, parser);
if (append != null && (append.toLowerCase().equals("yes") ||
append.toLowerCase().equals("true"))) {
_append = true;
}
else
_append = false;
parseChildren(parser);
}
/**
* Type checks the 'file' attribute (must be able to convert it to a str).
*/
public Type typeCheck(SymbolTable stable) throws TypeCheckError {
final Type type = _filename.typeCheck(stable);
if (type instanceof StringType == false) {
_filename = new CastExpr(_filename, Type.String);
}
typeCheckContents(stable);
return Type.Void;
}
/**
 * Compile code that opens the given file for output, dumps the contents of
* the element to the file, then closes the file.
*/
public void translate(ClassGenerator classGen, MethodGenerator methodGen) {
final ConstantPoolGen cpg = classGen.getConstantPool();
final InstructionList il = methodGen.getInstructionList();
final boolean isSecureProcessing = classGen.getParser().getXSLTC()
.isSecureProcessing();
final boolean isExtensionFunctionEnabled = classGen.getParser().getXSLTC()
.getFeature(JdkXmlFeatures.XmlFeature.ENABLE_EXTENSION_FUNCTION);
if (isSecureProcessing && !isExtensionFunctionEnabled) {
int index = cpg.addMethodref(BASIS_LIBRARY_CLASS,
"unallowed_extension_elementF",
"(Ljava/lang/String;)V");
il.append(new PUSH(cpg, "redirect"));
il.append(new INVOKESTATIC(index));
return;
}
// Save the current output handler on the stack
il.append(methodGen.loadHandler());
final int open = cpg.addMethodref(TRANSLET_CLASS,
"openOutputHandler",
"(" + STRING_SIG + "Z)" +
TRANSLET_OUTPUT_SIG);
final int close = cpg.addMethodref(TRANSLET_CLASS,
"closeOutputHandler",
"("+TRANSLET_OUTPUT_SIG+")V");
// Create the new output handler (leave it on stack)
il.append(classGen.loadTranslet());
_filename.translate(classGen, methodGen);
il.append(new PUSH(cpg, _append));
il.append(new INVOKEVIRTUAL(open));
// Overwrite current handler
il.append(methodGen.storeHandler());
// Translate contents with substituted handler
translateContents(classGen, methodGen);
// Close the output handler (close file)
il.append(classGen.loadTranslet());
il.append(methodGen.loadHandler());
il.append(new INVOKEVIRTUAL(close));
// Restore old output handler from stack
il.append(methodGen.storeHandler());
}
}
| {
"pile_set_name": "Github"
} |
package cu
import (
"log"
"runtime"
"testing"
"unsafe"
)
func TestBatchContext(t *testing.T) {
log.Print("BatchContext")
var err error
var dev Device
var cuctx CUContext
var mod Module
var fn Function
if dev, cuctx, err = testSetup(); err != nil {
if err.Error() == "NoDevice" {
return
}
t.Fatal(err)
}
if mod, err = LoadData(add32PTX); err != nil {
t.Fatalf("Cannot load add32: %v", err)
}
if fn, err = mod.Function("add32"); err != nil {
t.Fatalf("Cannot get add32(): %v", err)
}
ctx := newContext(cuctx)
bctx := NewBatchedContext(ctx, dev)
runtime.LockOSThread()
defer runtime.UnlockOSThread()
doneChan := make(chan struct{})
a := make([]float32, 1000)
b := make([]float32, 1000)
go func() {
for i := range b {
a[i] = 1
b[i] = 1
}
size := int64(len(a) * 4)
var memA, memB DevicePtr
if memA, err = bctx.AllocAndCopy(unsafe.Pointer(&a[0]), size); err != nil {
t.Fatalf("Cannot allocate A: %v", err)
}
if memB, err = bctx.MemAlloc(size); err != nil {
t.Fatalf("Cannot allocate B: %v", err)
}
args := []unsafe.Pointer{
unsafe.Pointer(&memA),
unsafe.Pointer(&memB),
unsafe.Pointer(&size),
}
bctx.MemcpyHtoD(memB, unsafe.Pointer(&b[0]), size)
bctx.LaunchKernel(fn, 1, 1, 1, len(a), 1, 1, 0, Stream{}, args)
bctx.Synchronize()
bctx.MemcpyDtoH(unsafe.Pointer(&a[0]), memA, size)
bctx.MemcpyDtoH(unsafe.Pointer(&b[0]), memB, size)
bctx.MemFree(memA)
bctx.MemFree(memB)
bctx.workAvailable <- struct{}{}
doneChan <- struct{}{}
}()
loop:
for {
select {
case <-bctx.workAvailable:
bctx.DoWork()
case <-doneChan:
break loop
}
}
if err = Synchronize(); err != nil {
t.Errorf("Failed to Sync %v", err)
}
for _, v := range a {
if v != float32(2) {
t.Errorf("Expected all values to be 2. %v", a)
break
}
}
mod.Unload()
cuctx.Destroy()
}
func TestLargeBatch(t *testing.T) {
log.Printf("Large batch")
var err error
var dev Device
var cuctx CUContext
var mod Module
var fn Function
if dev, cuctx, err = testSetup(); err != nil {
if err.Error() == "NoDevice" {
return
}
t.Fatal(err)
}
if mod, err = LoadData(add32PTX); err != nil {
t.Fatalf("Cannot load add32: %v", err)
}
if fn, err = mod.Function("add32"); err != nil {
t.Fatalf("Cannot get add32(): %v", err)
}
dev.TotalMem()
beforeFree, _, _ := MemInfo()
ctx := newContext(cuctx)
bctx := NewBatchedContext(ctx, dev)
runtime.LockOSThread()
defer runtime.UnlockOSThread()
doneChan := make(chan struct{})
a := make([]float32, 1000)
b := make([]float32, 1000)
for i := range b {
a[i] = 1
b[i] = 1
}
size := int64(len(a) * 4)
go func() {
var memA, memB DevicePtr
var frees []DevicePtr
for i := 0; i < 104729; i++ {
if memA, err = bctx.AllocAndCopy(unsafe.Pointer(&a[0]), size); err != nil {
t.Fatalf("Cannot allocate A: %v", err)
}
if memB, err = bctx.MemAlloc(size); err != nil {
t.Fatalf("Cannot allocate B: %v", err)
}
args := []unsafe.Pointer{
unsafe.Pointer(&memA),
unsafe.Pointer(&memB),
unsafe.Pointer(&size),
}
bctx.MemcpyHtoD(memB, unsafe.Pointer(&b[0]), size)
bctx.LaunchKernel(fn, 1, 1, 1, len(a), 1, 1, 0, Stream{}, args)
bctx.Synchronize()
if i%13 == 0 {
frees = append(frees, memA)
frees = append(frees, memB)
} else {
bctx.MemFree(memA)
bctx.MemFree(memB)
}
}
bctx.MemcpyDtoH(unsafe.Pointer(&a[0]), memA, size)
bctx.MemcpyDtoH(unsafe.Pointer(&b[0]), memB, size)
log.Printf("Number of frees %v", len(frees))
for _, free := range frees {
bctx.MemFree(free)
}
bctx.workAvailable <- struct{}{}
doneChan <- struct{}{}
}()
loop:
for {
select {
case <-bctx.workAvailable:
bctx.DoWork()
case <-doneChan:
break loop
default:
}
}
bctx.DoWork()
if err = Synchronize(); err != nil {
t.Errorf("Failed to Sync %v", err)
}
for _, v := range a {
if v != float32(2) {
t.Errorf("Expected all values to be 2. %v", a)
break
}
}
afterFree, _, _ := MemInfo()
if afterFree != beforeFree {
t.Errorf("Before: Freemem: %v. After %v | Diff %v", beforeFree, afterFree, (beforeFree-afterFree)/1024)
}
mod.Unload()
cuctx.Destroy()
}
func BenchmarkNoBatching(bench *testing.B) {
runtime.LockOSThread()
defer runtime.UnlockOSThread()
var err error
var ctx CUContext
var mod Module
var fn Function
if _, ctx, err = testSetup(); err != nil {
if err.Error() == "NoDevice" {
return
}
bench.Fatal(err)
}
if mod, err = LoadData(add32PTX); err != nil {
bench.Fatalf("Cannot load add32: %v", err)
}
if fn, err = mod.Function("add32"); err != nil {
bench.Fatalf("Cannot get add32(): %v", err)
}
a := make([]float32, 1000000)
b := make([]float32, 1000000)
for i := range b {
a[i] = 1
b[i] = 1
}
size := int64(len(a) * 4)
var memA, memB DevicePtr
if memA, err = MemAlloc(size); err != nil {
bench.Fatalf("Failed to allocate for a: %v", err)
}
if memB, err = MemAlloc(size); err != nil {
bench.Fatalf("Failed to allocate for b: %v", err)
}
args := []unsafe.Pointer{
unsafe.Pointer(&memA),
unsafe.Pointer(&memB),
unsafe.Pointer(&size),
}
// ACTUAL BENCHMARK STARTS HERE
for i := 0; i < bench.N; i++ {
for j := 0; j < 100; j++ {
if err = MemcpyHtoD(memA, unsafe.Pointer(&a[0]), size); err != nil {
bench.Fatalf("Failed to copy memory from a: %v", err)
}
if err = MemcpyHtoD(memB, unsafe.Pointer(&b[0]), size); err != nil {
bench.Fatalf("Failed to copy memory from b: %v", err)
}
if err = fn.LaunchAndSync(100, 10, 1, 1000, 1, 1, 1, Stream{}, args); err != nil {
bench.Errorf("Launch and Sync Failed: %v", err)
}
if err = MemcpyDtoH(unsafe.Pointer(&a[0]), memA, size); err != nil {
bench.Fatalf("Failed to copy memory to a: %v", err)
}
if err = MemcpyDtoH(unsafe.Pointer(&b[0]), memB, size); err != nil {
bench.Fatalf("Failed to copy memory to b: %v", err)
}
}
}
MemFree(memA)
MemFree(memB)
mod.Unload()
ctx.Destroy()
}
func BenchmarkBatching(bench *testing.B) {
runtime.LockOSThread()
defer runtime.UnlockOSThread()
var err error
var dev Device
var cuctx CUContext
var mod Module
var fn Function
if dev, cuctx, err = testSetup(); err != nil {
if err.Error() == "NoDevice" {
return
}
bench.Fatal(err)
}
if mod, err = LoadData(add32PTX); err != nil {
bench.Fatalf("Cannot load add32: %v", err)
}
if fn, err = mod.Function("add32"); err != nil {
bench.Fatalf("Cannot get add32(): %v", err)
}
a := make([]float32, 1000000)
b := make([]float32, 1000000)
for i := range b {
a[i] = 1
b[i] = 1
}
size := int64(len(a) * 4)
var memA, memB DevicePtr
if memA, err = MemAlloc(size); err != nil {
bench.Fatalf("Failed to allocate for a: %v", err)
}
if memB, err = MemAlloc(size); err != nil {
bench.Fatalf("Failed to allocate for b: %v", err)
}
ctx := newContext(cuctx)
bctx := NewBatchedContext(ctx, dev)
args := []unsafe.Pointer{
unsafe.Pointer(&memA),
unsafe.Pointer(&memB),
unsafe.Pointer(&size),
}
// ACTUAL BENCHMARK STARTS HERE
workAvailable := bctx.WorkAvailable()
for i := 0; i < bench.N; i++ {
for j := 0; j < 100; j++ {
select {
case <-workAvailable:
bctx.DoWork()
default:
bctx.MemcpyHtoD(memA, unsafe.Pointer(&a[0]), size)
bctx.MemcpyHtoD(memB, unsafe.Pointer(&b[0]), size)
bctx.LaunchKernel(fn, 100, 10, 1, 1000, 1, 1, 0, Stream{}, args)
bctx.Synchronize()
bctx.MemcpyDtoH(unsafe.Pointer(&a[0]), memA, size)
bctx.MemcpyDtoH(unsafe.Pointer(&b[0]), memB, size)
}
}
}
MemFree(memA)
MemFree(memB)
mod.Unload()
cuctx.Destroy()
}
| {
"pile_set_name": "Github"
} |
"""
Metadata for all sources.
"""
# Author: Prabhu Ramachandran <prabhu@aero.iitb.ac.in>
# Copyright (c) 2008, Prabhu Ramachandran Enthought, Inc.
# License: BSD Style.
# Local imports.
from mayavi.core.metadata import SourceMetadata
from mayavi.core.pipeline_info import PipelineInfo
BASE = 'mayavi.sources'
open_3ds = SourceMetadata(
id = "3DSFile",
class_name = BASE + ".three_ds_importer.ThreeDSImporter",
tooltip = "Import a 3D Studio file",
desc = "Import a 3D Studio file",
help = "Import a 3D Studio file",
menu_name = "&3D Studio file",
extensions = ['3ds'],
wildcard = '3D Studio files (*.3ds)|*.3ds',
output_info = PipelineInfo(datasets=['none'],
attribute_types=['any'],
attributes=['any'])
)
open_image = SourceMetadata(
id = "ImageFile",
class_name = BASE + ".image_reader.ImageReader",
menu_name = "&Image file (PNG/JPG/BMP/PNM/TIFF/DEM/DCM/XIMG/MHA/MHD/MINC)",
tooltip = "Import a PNG/JPG/BMP/PNM/TIFF/DCM/DEM/XIMG/MHA/MHD/MINC image",
desc = "Import a PNG/JPG/BMP/PNM/TIFF/DCM/DEM/XIMG/MHA/MHD/MINC image",
extensions = ['png', 'jpg', 'jpeg', 'bmp', 'pnm', 'tiff', 'dcm', 'dem',
'ximg', 'mha', 'mhd', 'mnc'],
wildcard = 'PNG files (*.png)|*.png|'\
'JPEG files (*.jpg)|*.jpg|'\
'JPEG files (*.jpeg)|*.jpeg|'\
'BMP files (*.bmp)|*.bmp|'\
'PNM files (*.pnm)|*.pnm|'\
'DCM files (*.dcm)|*.dcm|'\
'DEM files (*.dem)|*.dem|'\
'Meta mha files (*.mha)|*.mha|'\
'Meta mhd files (*.mhd)|*.mhd|'\
'MINC files (*.mnc)|*.mnc|'\
'XIMG files (*.ximg)|*.ximg|'\
'TIFF files (*.tiff)|*.tiff',
output_info = PipelineInfo(datasets=['image_data'],
attribute_types=['any'],
attributes=['any'])
)
open_poly_data = SourceMetadata(
id = "PolyDataFile",
class_name = BASE + ".poly_data_reader.PolyDataReader",
menu_name = "&PolyData file (STL/STLA/STLB/TXT/RAW/PLY/PDB/SLC/FACET\
/OBJ/BYU/XYZ/CUBE)",
tooltip = "Import a STL/STLA/STLB/TXT/RAW/PLY/PDB/SLC/FACET/OBJ/\
BYU/XYZ/CUBE Poly Data",
    desc = "Import a STL/STLA/STLB/TXT/RAW/PLY/PDB/SLC/FACET/OBJ/BYU/XYZ/\
            CUBE Poly Data",
extensions = ['stl', 'stla', 'stlb', 'txt', 'raw', 'ply', 'pdb', 'slc',
'facet', 'xyz', 'cube', 'obj', 'g'],
wildcard = 'STL files (*.stl)|*.stl|'\
'STLA files (*.stla)|*.stla|'\
'STLB files (*.stlb)|*.stlb|'\
'BYU files (*.g)|*.g|'\
'TXT files (*.txt)|*.txt|'\
'RAW files (*.raw)|*.raw|'\
'PLY files (*.ply)|*.ply|'\
'PDB files (*.pdb)|*.pdb|'\
'SLC files (*.slc)|*.slc|'\
'XYZ files (*.xyz)|*.xyz|'\
'CUBE files (*.cube)|*.cube|'\
'FACET files (*.facet)|*.facet|'\
'OBJ files (*.obj)|*.obj',
can_read_test = 'mayavi.sources.poly_data_reader:PolyDataReader.can_read',
output_info = PipelineInfo(datasets=['poly_data'],
attribute_types=['any'],
attributes=['any'])
)
open_ugrid_data = SourceMetadata(
id = "VTKUnstructuredFile",
class_name = BASE + ".unstructured_grid_reader.UnstructuredGridReader",
    menu_name = "&Unstructured Grid file (INP/NEU/EXII)",
    tooltip = "Open an Unstructured Grid file",
    desc = "Open an Unstructured Grid file",
    help = "Open an Unstructured Grid file",
extensions = ['inp', 'neu', 'exii'],
wildcard = 'AVSUCD INP files (*.inp)|*.inp|'\
'GAMBIT NEU (*.neu)|*.neu|'\
'EXODUS EXII (*.exii)|*.exii',
output_info = PipelineInfo(datasets=['any'],
attribute_types=['any'],
attributes=['any'])
)
open_plot3d = SourceMetadata(
id = "PLOT3DFile",
class_name = BASE + ".plot3d_reader.PLOT3DReader",
menu_name = "&PLOT3D file",
    tooltip = "Open a PLOT3D data file",
    desc = "Open a PLOT3D data file",
    help = "Open a PLOT3D data file",
extensions = ['xyz'],
wildcard = 'PLOT3D files (*.xyz)|*.xyz',
output_info = PipelineInfo(datasets=['structured_grid'],
attribute_types=['any'],
attributes=['any'])
)
open_vrml = SourceMetadata(
id = "VRMLFile",
class_name = BASE + ".vrml_importer.VRMLImporter",
menu_name = "V&RML2 file",
tooltip = "Import a VRML2 data file",
desc = "Import a VRML2 data file",
help = "Import a VRML2 data file",
extensions = ['wrl'],
wildcard = 'VRML2 files (*.wrl)|*.wrl',
output_info = PipelineInfo(datasets=['none'],
attribute_types=['any'],
attributes=['any'])
)
open_vtk = SourceMetadata(
id = "VTKFile",
class_name = BASE + ".vtk_file_reader.VTKFileReader",
menu_name = "&VTK file",
tooltip = "Open a VTK data file",
desc = "Open a VTK data file",
help = "Open a VTK data file",
extensions = ['vtk'],
wildcard = 'VTK files (*.vtk)|*.vtk',
output_info = PipelineInfo(datasets=['any'],
attribute_types=['any'],
attributes=['any'])
)
open_vtk_xml = SourceMetadata(
id = "VTKXMLFile",
class_name = BASE + ".vtk_xml_file_reader.VTKXMLFileReader",
menu_name = "VTK &XML file",
tooltip = "Open a VTK XML data file",
desc = "Open a VTK XML data file",
help = "Open a VTK XML data file",
extensions = ['xml', 'vti', 'vtp', 'vtr', 'vts', 'vtu',
'pvti', 'pvtp', 'pvtr', 'pvts', 'pvtu'],
wildcard = 'VTK XML files (*.xml)|*.xml|'\
'Image Data (*.vti)|*.vti|'\
'Poly Data (*.vtp)|*.vtp|'\
'Rectilinear Grid (*.vtr)|*.vtr|'\
'Structured Grid (*.vts)|*.vts|'\
'Unstructured Grid (*.vtu)|*.vtu|'\
'Parallel Image Data (*.pvti)|*.pvti|'\
'Parallel Poly Data (*.pvtp)|*.pvtp|'\
'Parallel Rectilinear Grid (*.pvtr)|*.pvtr|'\
'Parallel Structured Grid (*.pvts)|*.pvts|'\
'Parallel Unstructured Grid (*.pvtu)|*.pvtu',
output_info = PipelineInfo(datasets=['any'],
attribute_types=['any'],
attributes=['any'])
)
parametric_surface = SourceMetadata(
id = "ParametricSurfaceSource",
class_name = BASE + ".parametric_surface.ParametricSurface",
menu_name = "&Create Parametric surface source",
tooltip = "Create a parametric surface source",
desc = "Create a parametric surface source",
help = "Create a parametric surface source",
extensions = [],
wildcard = '',
output_info = PipelineInfo(datasets=['poly_data'],
attribute_types=['any'],
attributes=['any'])
)
point_load = SourceMetadata(
id = "PointLoadSource",
class_name = BASE + ".point_load.PointLoad",
menu_name = "Create Point &load source",
tooltip = "Simulates a point load on a cube of data (for tensors)",
desc = "Simulates a point load on a cube of data (for tensors)",
help = "Simulates a point load on a cube of data (for tensors)",
extensions = [],
wildcard = '',
output_info = PipelineInfo(datasets=['image_data'],
attribute_types=['any'],
attributes=['any'])
)
builtin_surface = SourceMetadata(
id = "BuiltinSurfaceSource",
class_name = BASE + ".builtin_surface.BuiltinSurface",
menu_name = "Create built-in &surface",
tooltip = "Create a vtk poly data source",
desc = "Create a vtk poly data source",
help = "Create a vtk poly data source",
extensions = [],
wildcard = '',
output_info = PipelineInfo(datasets=['poly_data'],
attribute_types=['any'],
attributes=['any'])
)
builtin_image = SourceMetadata(
id = "BuiltinImageSource",
class_name = BASE + ".builtin_image.BuiltinImage",
menu_name = "Create built-in &image",
tooltip = "Create a vtk image data source",
desc = "Create a vtk image data source",
help = "Create a vtk image data source",
extensions = [],
wildcard = '',
output_info = PipelineInfo(datasets=['image_data'],
attribute_types=['any'],
attributes=['any'])
)
open_volume = SourceMetadata(
id = "VolumeFile",
class_name = BASE + ".volume_reader.VolumeReader",
menu_name = "&Volume file",
tooltip = "Open a Volume file",
desc = "Open a Volume file",
help = "Open a Volume file",
extensions = [],
wildcard = '',
output_info = PipelineInfo(datasets=['image_data'],
attribute_types=['any'],
attributes=['any'])
)
open_chaco = SourceMetadata(
id = "ChacoFile",
class_name = BASE + ".chaco_reader.ChacoReader",
menu_name = "&Chaco file",
tooltip = "Open a Chaco file",
desc = "Open a Chaco file",
help = "Open a Chaco file",
extensions = [],
wildcard = '',
output_info = PipelineInfo(datasets=['unstructured_grid'],
attribute_types=['any'],
attributes=['any'])
)
# Now collect all the sources for the mayavi registry.
sources = [open_3ds,
open_image,
open_plot3d,
open_vrml,
open_vtk,
open_vtk_xml,
parametric_surface,
point_load,
builtin_surface,
builtin_image,
open_poly_data,
open_ugrid_data,
open_volume,
open_chaco,
]
| {
"pile_set_name": "Github"
} |
/*
* Copyright 2017 TWO SIGMA OPEN SOURCE, LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.twosigma.flint.rdd.function.summarize
import com.twosigma.flint.rdd.function.summarize.summarizer.subtractable.LeftSubtractableSummarizer
import com.twosigma.flint.rdd.function.summarize.summarizer.subtractable.{ SumSummarizer => SumSum }
import scala.Serializable
import org.scalatest.FlatSpec
import org.scalactic.{ TolerantNumerics, Equality }
import com.twosigma.flint.SharedSparkContext
import com.twosigma.flint.rdd.{ KeyPartitioningType, OrderedRDD }
class SummarizationsSpec extends FlatSpec with SharedSparkContext {
val data = Array(
(1000L, (1, 0.01)),
(1000L, (2, 0.01)),
(1005L, (1, 0.01)),
(1005L, (2, 0.01)),
(1010L, (1, 0.01)),
(1010L, (2, 0.01)),
(1015L, (1, 0.01)),
(1015L, (2, 0.01)),
(1020L, (1, 0.01)),
(1020L, (2, 0.01)),
(1025L, (1, 0.01)),
(1025L, (2, 0.01)),
(1030L, (1, 0.01)),
(1030L, (2, 0.01)),
(1035L, (1, 0.01)),
(1035L, (2, 0.01)),
(1040L, (1, 0.01)),
(1040L, (2, 0.01)),
(1045L, (1, 0.01)),
(1045L, (2, 0.01))
)
val expected = List(
(1000, ((1, 0.01), 0.01)),
(1000, ((2, 0.01), 0.02)),
(1005, ((1, 0.01), 0.03)),
(1005, ((2, 0.01), 0.04)),
(1010, ((1, 0.01), 0.05)),
(1010, ((2, 0.01), 0.06)),
(1015, ((1, 0.01), 0.07)),
(1015, ((2, 0.01), 0.08)),
(1020, ((1, 0.01), 0.09)),
(1020, ((2, 0.01), 0.10)),
(1025, ((1, 0.01), 0.11)),
(1025, ((2, 0.01), 0.12)),
(1030, ((1, 0.01), 0.13)),
(1030, ((2, 0.01), 0.14)),
(1035, ((1, 0.01), 0.15)),
(1035, ((2, 0.01), 0.16)),
(1040, ((1, 0.01), 0.17)),
(1040, ((2, 0.01), 0.18)),
(1045, ((1, 0.01), 0.19)),
(1045, ((2, 0.01), 0.20))
)
val expectedPerSK = List(
(1000, ((1, 0.01), 0.01)),
(1000, ((2, 0.01), 0.01)),
(1005, ((1, 0.01), 0.02)),
(1005, ((2, 0.01), 0.02)),
(1010, ((1, 0.01), 0.03)),
(1010, ((2, 0.01), 0.03)),
(1015, ((1, 0.01), 0.04)),
(1015, ((2, 0.01), 0.04)),
(1020, ((1, 0.01), 0.05)),
(1020, ((2, 0.01), 0.05)),
(1025, ((1, 0.01), 0.06)),
(1025, ((2, 0.01), 0.06)),
(1030, ((1, 0.01), 0.07)),
(1030, ((2, 0.01), 0.07)),
(1035, ((1, 0.01), 0.08)),
(1035, ((2, 0.01), 0.08)),
(1040, ((1, 0.01), 0.09)),
(1040, ((2, 0.01), 0.09)),
(1045, ((1, 0.01), 0.10)),
(1045, ((2, 0.01), 0.10))
)
var orderedRDD: OrderedRDD[Long, (Int, Double)] = _
var orderedRDD1: OrderedRDD[Long, (Int, Double)] = _
implicit val doubleEq = TolerantNumerics.tolerantDoubleEquality(1.0e-6)
implicit val equality = new Equality[List[Double]] {
override def areEqual(a: List[Double], b: Any): Boolean = {
(a, b) match {
case (Nil, Nil) => true
case (x :: xs, y :: ys) => x === y && areEqual(xs, ys)
case _ => false
}
}
}
override def beforeAll() {
super.beforeAll()
orderedRDD = OrderedRDD.fromRDD(sc.parallelize(data, 4), KeyPartitioningType.Sorted)
orderedRDD1 = OrderedRDD.fromRDD(sc.parallelize(data, 1), KeyPartitioningType.Sorted)
}
"Summarizations" should "apply correctly" in {
val summarizer = KVSumSummarizer()
val key = { case ((sk, v)) => None }: ((Int, Double)) => Option[Nothing]
val ret = Summarizations(orderedRDD, summarizer, key)
val ret1 = Summarizations(orderedRDD1, summarizer, key)
assert(ret.collect().toList.map(_._2._2) === expected.map(_._2._2))
assert(ret1.collect().toList.map(_._2._2) === expected.map(_._2._2))
}
it should "apply perSecondaryKey == true correctly" in {
val summarizer = KVSumSummarizer()
val key = { case ((sk, v)) => sk }: ((Int, Double)) => Int
val ret = Summarizations(orderedRDD, summarizer, key)
val ret1 = Summarizations(orderedRDD1, summarizer, key)
assert(ret.collect().toList.map(_._2._2) === expectedPerSK.map(_._2._2))
assert(ret1.collect().toList.map(_._2._2) === expectedPerSK.map(_._2._2))
}
}
| {
"pile_set_name": "Github"
} |
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
rest "k8s.io/client-go/rest"
)
// SelfSubjectAccessReviewsGetter has a method to return a SelfSubjectAccessReviewInterface.
// A group's client should implement this interface.
type SelfSubjectAccessReviewsGetter interface {
SelfSubjectAccessReviews() SelfSubjectAccessReviewInterface
}
// SelfSubjectAccessReviewInterface has methods to work with SelfSubjectAccessReview resources.
type SelfSubjectAccessReviewInterface interface {
SelfSubjectAccessReviewExpansion
}
// selfSubjectAccessReviews implements SelfSubjectAccessReviewInterface
type selfSubjectAccessReviews struct {
client rest.Interface
}
// newSelfSubjectAccessReviews returns a SelfSubjectAccessReviews
func newSelfSubjectAccessReviews(c *AuthorizationV1Client) *selfSubjectAccessReviews {
return &selfSubjectAccessReviews{
client: c.RESTClient(),
}
}
| {
"pile_set_name": "Github"
} |
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Forced Browse Options screen</title>
</head>
<body bgcolor="#ffffff">
<h1>Forced Browse Options screen</h1>
<p>
This screen allows you to configure the Forced Browse options:
<h3>Concurrent scanning threads per host</h3>
The number of threads the scanner will use per host.
<br>Increasing the number of threads will speed up the scan but may put extra strain on the computer ZAP is running on and on the target host.
<h3>Recursive</h3>
If checked, the scanner will also scan all of the sub-directories it finds.
<br>This can take a significant amount of time.
<h3>Default file</h3>
The file selected by default when ZAP starts.
<h3>Add custom Forced Browse file</h3>
Allows you to add your own files to be used when brute-forcing files and directories.
<br>They must be plain-text files with one file or directory name per line.
<br>The files are added to the "dirbuster" directory under ZAP's home directory.
</body>
</html>
| {
"pile_set_name": "Github"
} |
version: '2'
services:
nsqlookupd01:
image: nsqio/nsq:v1.0.0-compat
command: /nsqlookupd
labels:
io.rancher.scheduler.affinity:host_label_soft: nsqlookupd=true
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/nsqlookupd02,io.rancher.stack_service.name=$${stack_name}/nsqlookupd03
nsqlookupd02:
image: nsqio/nsq:v1.0.0-compat
command: /nsqlookupd
labels:
io.rancher.scheduler.affinity:host_label_soft: nsqlookupd=true
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/nsqlookupd01,io.rancher.stack_service.name=$${stack_name}/nsqlookupd03
nsqlookupd03:
image: nsqio/nsq:v1.0.0-compat
command: /nsqlookupd
labels:
io.rancher.scheduler.affinity:host_label_soft: nsqlookupd=true
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/nsqlookupd01,io.rancher.stack_service.name=$${stack_name}/nsqlookupd02
nsqd:
image: nsqio/nsq:v1.0.0-compat
command:
- /bin/sh
- -c
- nsqd --data-path=/data --lookupd-tcp-address=nsqlookupd01:4160 --lookupd-tcp-address=nsqlookupd02:4160 --lookupd-tcp-address=nsqlookupd03:4160 -broadcast-address=$$HOSTNAME
labels:
io.rancher.scheduler.affinity:host_label_soft: nsqd=true
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.sidekicks: data
io.rancher.container.hostname_override: container_name
volumes_from:
- data
nsqadmin:
image: nsqio/nsq:v1.0.0-compat
command: /nsqadmin --lookupd-http-address=nsqlookupd01:4161 --lookupd-http-address=nsqlookupd02:4161 --lookupd-http-address=nsqlookupd03:4161
labels:
io.rancher.scheduler.affinity:host_label_soft: nsqadmin=true
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
nsq-lb:
image: rancher/lb-service-haproxy:v0.7.9
ports:
- 4150:4150/tcp
- 4151:4151/tcp
- 4171:4171/tcp
labels:
io.rancher.scheduler.global: "true"
io.rancher.scheduler.affinity:host_label: nsq-lb=true
data:
image: busybox
command: /bin/true
volumes:
- /data
labels:
io.rancher.container.start_once: 'true'
To show `assets/www` or `res/xml/config.xml`, go to:
Project -> Properties -> Resource -> Resource Filters
And delete the exclusion filter.
/* -*- Mode: C; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 4 -*- */
/* Duda I/O
* --------
* Copyright (C) 2012-2014, Eduardo Silva P. <edsiper@gmail.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Library General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#include <arpa/inet.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/epoll.h>
#include <netdb.h>
#include "duda_api.h"
#include "duda_package.h"
#include "ssls.h"
#if (POLARSSL_VERSION_NUMBER < 0x01020000)
static int ssls_ciphersuites[] =
{
SSL_EDH_RSA_AES_256_SHA,
SSL_EDH_RSA_CAMELLIA_256_SHA,
SSL_EDH_RSA_AES_128_SHA,
SSL_EDH_RSA_CAMELLIA_128_SHA,
SSL_RSA_AES_256_SHA,
SSL_RSA_CAMELLIA_256_SHA,
SSL_RSA_AES_128_SHA,
SSL_RSA_CAMELLIA_128_SHA,
SSL_RSA_RC4_128_SHA,
SSL_RSA_RC4_128_MD5,
0
};
#endif
static void ssls_error(int c)
{
(void) c;
//char err_buf[72];
//error_strerror(c, err_buf, sizeof(err_buf));
//msg->warn("[ssls] %s", err_buf);
}
/* Modify the events flags for a registered file descriptor */
static int ssls_event_handler(int efd, int fd, int ctrl, int mode)
{
int ret;
struct epoll_event event = {0, {0}};
event.data.fd = fd;
event.events = EPOLLERR | EPOLLHUP | EPOLLRDHUP | mode;
event.events |= EPOLLET;
/* Add to epoll queue */
ret = epoll_ctl(efd, ctrl, fd, &event);
if (ret < 0) {
perror("epoll_ctl");
return -1;
}
return 0;
}
int ssls_write(ssls_conn_t *conn, unsigned char *buf, int size)
{
return ssl_write(&conn->ssl_ctx, buf, size);
}
/* Change a file descriptor mode */
int ssls_event_mod(int efd, int fd, int mode)
{
int r;
r = ssls_event_handler(efd, fd, EPOLL_CTL_MOD, mode);
printf("event mod %i\n", r);
return r;
}
/* Register a given file descriptor into the epoll queue */
int ssls_event_add(int efd, int fd)
{
return ssls_event_handler(efd, fd, EPOLL_CTL_ADD, EPOLLIN);
}
/* Remove an epoll file descriptor from the queue */
int ssls_event_del(int efd, int fd)
{
return epoll_ctl(efd, EPOLL_CTL_DEL, fd, NULL);
}
static int ssls_create_socket(int domain, int type, int protocol)
{
return socket(domain, type, protocol);
}
static int ssls_socket_bind(int socket_fd, const struct sockaddr *addr,
socklen_t addrlen, int backlog)
{
int ret;
ret = bind(socket_fd, addr, addrlen);
if( ret == -1 ) {
mk_warn("Error binding socket");
return ret;
}
ret = listen(socket_fd, backlog);
if(ret == -1 ) {
mk_warn("Error setting up the listener");
return -1;
}
return ret;
}
int ssls_load_dh_param(ssls_ctx_t *ctx, char *dh_file)
{
char err_buf[72];
int ret;
ret = x509parse_dhmfile(&ctx->dhm, dh_file);
if (ret < 0) {
error_strerror(ret, err_buf, sizeof(err_buf));
msg->warn("[ssls] Load DH param file '%s' failed: %s",
dh_file,
err_buf);
return -1;
}
return 0;
}
int ssls_load_ca_root_cert(ssls_ctx_t *ctx, char *cert_file)
{
char err_buf[72];
int ret;
ret = x509parse_crtfile(&ctx->cacert, cert_file);
if (ret) {
error_strerror(ret, err_buf, sizeof(err_buf));
msg->warn("[ssls] Load CA root '%s' failed: %s",
cert_file,
err_buf);
return -1;
}
return 0;
}
int ssls_load_cert(ssls_ctx_t *ctx, char *cert_file)
{
char err_buf[72];
int ret;
ret = x509parse_crtfile(&ctx->srvcert, cert_file);
if (ret) {
error_strerror(ret, err_buf, sizeof(err_buf));
msg->warn("[ssls] Load certificate '%s' failed: %s",
cert_file,
err_buf);
return -1;
}
return 0;
}
int ssls_load_key(ssls_ctx_t *ctx, char *key_file)
{
char err_buf[72];
int ret;
ret = x509parse_keyfile(&ctx->rsa, key_file, NULL);
if (ret < 0) {
error_strerror(ret, err_buf, sizeof(err_buf));
msg->warn("[ssls] Load key '%s' failed: %s",
key_file,
err_buf);
return -1;
}
return 0;
}
static void ssls_ssl_debug(void *ctx, int level, const char *str)
{
(void) ctx;
(void) level;
//if (level < POLAR_DEBUG_LEVEL) {
printf("[SSL] %s", str);
//}
}
/* Register and initialize new connection into the server context */
static ssls_conn_t *ssls_register_connection(ssls_ctx_t *ctx, int fd)
{
ssl_context *ssl;
ssls_conn_t *conn = NULL;
conn = monkey->mem_alloc(sizeof(ssls_conn_t));
if (!conn) {
msg->err("[SSLS] could not allocate memory for new connection");
return NULL;
}
conn->fd = fd;
ssl = &conn->ssl_ctx;
/* SSL library initialization */
ssl_init(ssl);
ssl_set_endpoint(ssl, SSL_IS_SERVER);
ssl_set_authmode(ssl, SSL_VERIFY_NONE);
ssl_set_rng(ssl, ctr_drbg_random, &ctx->ctr_drbg);
ssl_set_dbg(ssl, ssls_ssl_debug, 0);
#if (POLARSSL_VERSION_NUMBER < 0x01020000)
ssl_set_ciphersuites(ssl, ssls_ciphersuites);
/* zero the session before handing it to ssl_set_session(); note
 * sizeof() must be taken on the object, not on the pointer */
memset(&conn->session, 0, sizeof(conn->session));
ssl_set_session(ssl, 0, 0, &conn->session);
#endif
#ifdef POLARSSL_SSL_CACHE_C
ssl_set_session_cache(ssl,
ssl_cache_get, &ctx->cache,
ssl_cache_set, &ctx->cache);
#endif
ssl_set_ca_chain(ssl, &ctx->cacert, NULL, NULL);
ssl_set_own_cert(ssl, &ctx->srvcert, &ctx->rsa);
#if defined(POLARSSL_DHM_C)
/*
* Use different group than default DHM group
*/
ssl_set_dh_param( ssl, POLARSSL_DHM_RFC5114_MODP_2048_P,
POLARSSL_DHM_RFC5114_MODP_2048_G );
ssl_set_dh_param_ctx(ssl, &ctx->dhm);
#endif
ssl_set_bio(ssl, net_recv, &conn->fd, net_send, &conn->fd);
mk_list_add(&conn->_head, &ctx->conns);
return conn;
}
/* Lookup an active SSL connection */
static ssls_conn_t *ssls_get_connection(ssls_ctx_t *ctx, int fd)
{
ssls_conn_t *conn = NULL;
struct mk_list *head;
mk_list_foreach(head, &ctx->conns) {
conn = mk_list_entry(head, ssls_conn_t, _head);
if (conn->fd == fd) {
return conn;
}
}
return NULL;
}
/* Remove a SSL connection */
static int ssls_remove_connection(ssls_ctx_t *ctx, int fd)
{
ssls_conn_t *conn;
struct mk_list *head, *tmp;
mk_list_foreach_safe(head, tmp, &ctx->conns) {
conn = mk_list_entry(head, ssls_conn_t, _head);
if (conn->fd == fd) {
mk_list_del(&conn->_head);
ssl_free(&conn->ssl_ctx);
monkey->mem_free(conn);
return 0;
}
}
return -1;
}
/*
 * Create a TCP server socket. We do not use the server core APIs
 * because we don't know which plugin is being used as the transport layer.
 */
int ssls_socket_server(int port, char *listen_addr)
{
int socket_fd = -1;
int ret;
char *port_str = 0;
unsigned long len;
struct addrinfo hints;
struct addrinfo *res, *rp;
memset(&hints, 0, sizeof hints);
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE;
monkey->str_build(&port_str, &len, "%d", port);
ret = getaddrinfo(listen_addr, port_str, &hints, &res);
monkey->mem_free(port_str);
if(ret != 0) {
mk_err("Can't get addr info: %s", gai_strerror(ret));
return -1;
}
for (rp = res; rp != NULL; rp = rp->ai_next) {
socket_fd = ssls_create_socket(rp->ai_family,
rp->ai_socktype,
rp->ai_protocol);
if( socket_fd == -1) {
mk_warn("Error creating server socket, retrying");
continue;
}
monkey->socket_set_tcp_nodelay(socket_fd);
monkey->socket_reset(socket_fd);
ret = ssls_socket_bind(socket_fd, rp->ai_addr, rp->ai_addrlen,
MK_SOMAXCONN);
if(ret == -1) {
mk_err("Cannot listen on %s:%i\n", listen_addr, port);
continue;
}
break;
}
freeaddrinfo(res);
if (rp == NULL)
return -1;
return socket_fd;
}
void ssls_set_callbacks(ssls_ctx_t *ctx,
void (*cb_accepted)(struct ssls_ctx *, ssls_conn_t *,
int),
void (*cb_read) (struct ssls_ctx *, ssls_conn_t *,
int, unsigned char *, int),
void (*cb_write) (struct ssls_ctx *, ssls_conn_t *, int),
void (*cb_close) (struct ssls_ctx *, ssls_conn_t *,
int),
void (*cb_timeout) (struct ssls_ctx *, ssls_conn_t *,
int))
{
ctx->cb_accepted = cb_accepted;
ctx->cb_read = cb_read;
ctx->cb_write = cb_write;
ctx->cb_close = cb_close;
ctx->cb_timeout = cb_timeout;
}
/*
 * Start the server loop that waits for incoming connections; this
 * function never returns.
 */
void ssls_server_loop(ssls_ctx_t *ctx)
{
int i;
int fd;
int ret;
int remote_fd;
int size;
int num_fds;
int max_events = 128;
int buf_size = 4096;
ssls_conn_t *conn;
struct sockaddr_un address;
socklen_t socket_size = sizeof(struct sockaddr_in);
/* per context we have a read-buffer that reads up to 4KB per round */
void *buf = monkey->mem_alloc(buf_size);
struct epoll_event *events;
/* validate the context */
if (!ctx || ctx->fd <= 0 || ctx->efd <=0) {
msg->err("[SSLS] Context not initialized properly. Aborting");
exit(EXIT_FAILURE);
}
/* events queue handler */
size = (max_events * sizeof(struct epoll_event));
events = (struct epoll_event *) malloc(size);
while (1) {
/* wait for events */
num_fds = epoll_wait(ctx->efd, events, max_events, -1);
for (i = 0; i < num_fds; i++) {
fd = events[i].data.fd;
if (events[i].events & EPOLLIN) {
/*
 * Event on the server socket: this means a new connection.
 * The proper way to handle it is to accept the connection
 * and perform a handshake before handing control back to
 * the event callbacks.
 */
if (fd == ctx->fd) {
remote_fd = accept(ctx->fd, (struct sockaddr *) &address,
&socket_size);
if (remote_fd < 0) {
continue;
}
/* set the socket to non-blocking mode */
conn = ssls_register_connection(ctx, remote_fd);
monkey->socket_set_nonblocking(remote_fd);
ssls_event_add(ctx->efd, remote_fd);
printf("NEW: %i %p\n", remote_fd, conn);
/* Report the new connection through accepted callback */
if (ctx->cb_accepted) {
ctx->cb_accepted(ctx, conn, remote_fd);
}
}
else {
/*
 * For an already registered connection we need the node that
 * holds its SSL state; since we are in "ready for read" mode
 * (EPOLLIN), we should proceed with the SSL handshake/read.
 */
conn = ssls_get_connection(ctx, fd);
/* If no connection node exists, the file descriptor belongs
 * to something else (someone is using our polling queue to
 * hook their own events), so we just invoke the proper
 * callback with the data we have.
 */
if (!conn) {
if (ctx->cb_read) {
ctx->cb_read(ctx, NULL, fd, NULL, -1);
}
continue;
}
ret = ssl_read(&conn->ssl_ctx, buf, buf_size);
if (ret == POLARSSL_ERR_NET_WANT_READ ||
ret == POLARSSL_ERR_NET_WANT_WRITE) {
printf("continue\n");
if (ret == POLARSSL_ERR_NET_WANT_READ){
printf("WANT read\n");
}
else if (ret == POLARSSL_ERR_NET_WANT_WRITE) {
printf("WANT write\n");
}
continue;
}
else if (ret == POLARSSL_ERR_SSL_CONN_EOF) {
printf("FIXME: exit\n");
exit(1);
}
if (ret > 0) {
/*
* we got some data in our buffer, lets invoke the
* READ callback and pass the data to it
*/
if (ctx->cb_read) {
/* pass the number of bytes actually read, not the buffer capacity */
ctx->cb_read(ctx, conn, fd, buf, ret);
}
}
else {
ssls_error(ret);
if (ctx->cb_close) {
ctx->cb_close(ctx, conn, fd);
}
ssls_remove_connection(ctx, fd);
close(fd);
}
}
}
else if (events[i].events & EPOLLOUT) {
printf("POLLOUT!\n");
if (ctx->cb_write) {
conn = ssls_get_connection(ctx, fd);
ctx->cb_write(ctx, conn, fd);
}
}
else {
printf("EOG!\n");
}
}
}
}
/* Initialize a context of SSL Server */
ssls_ctx_t *ssls_init(int port, char *listen_addr)
{
int fd;
int ret;
ssls_ctx_t *ctx;
/* create the context */
ctx = monkey->mem_alloc(sizeof(ssls_ctx_t));
if (!ctx) {
msg->err("[SSLS] Memory allocation failed. Aborting");
exit(EXIT_FAILURE);
}
memset(&ctx->cacert, 0, sizeof(x509_cert));
memset(&ctx->srvcert, 0, sizeof(x509_cert));
memset(&ctx->rsa, 0, sizeof(rsa_context));
memset(&ctx->dhm, 0, sizeof(dhm_context));
/* create a listener socket and bind the given address */
fd = ssls_socket_server(port, listen_addr);
if (fd == -1) {
monkey->mem_free(ctx);
return NULL;
}
ctx->fd = fd;
monkey->socket_set_nonblocking(ctx->fd);
/* create an epoll(7) queue */
ctx->efd = epoll_create(100);
if (ctx->efd == -1) {
msg->err("[SSLS] Cannot create epoll queue");
monkey->mem_free(ctx);
return NULL;
}
/* initialize head list for active connections */
mk_list_init(&ctx->conns);
/* Initialize PolarSSL internals */
#ifdef POLARSSL_SSL_CACHE_C
ssl_cache_init(&ctx->cache);
#endif
rsa_init(&ctx->rsa, RSA_PKCS_V15, 0);
entropy_init(&ctx->entropy);
ret = ctr_drbg_init(&ctx->ctr_drbg,
entropy_func, &ctx->entropy,
NULL, 0);
if (ret) {
msg->err("crt_drbg_init failed: %d", ret);
monkey->mem_free(ctx);
return NULL;
}
/* register the socket server into the events queue (default EPOLLIN) */
ssls_event_add(ctx->efd, ctx->fd);
return ctx;
}
// impressivewebs.com/reverse-ordered-lists-html5
// polyfill: github.com/impressivewebs/HTML5-Reverse-Ordered-Lists
Modernizr.addTest('olreversed', 'reversed' in document.createElement('ol'));
'use strict';
module.exports = Number.isNaN || function (x) {
return x !== x;
};
"""The idlelib package implements the Idle application.
Idle includes an interactive shell and editor.
Use the files named idle.* to start Idle.
The other files are private implementations. Their details are subject
to change. See PEP 434 for more. Import them at your own risk.
"""
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>Chapter 4. File Synchronize</title><link rel="stylesheet" type="text/css" href="..//docbook.css" /><meta name="generator" content="DocBook XSL Stylesheets V1.79.1" /><meta name="keywords" content="openfiler, freenas, proftpd,pureftpd,vsftpd, rsync,wget,samba" /><link rel="home" href="../index.html" title="Netkiller Linux Storage Notes" /><link rel="up" href="../index.html" title="Netkiller Linux Storage Notes" /><link rel="prev" href="../ftp/pureftpd.html" title="3.6. Pure-FTPd + LDAP + MySQL + PGSQL + Virtual-Users + Quota" /><link rel="next" href="tsync.html" title="4.2. tsync" /></head><body>
<a xmlns="" href="//www.netkiller.cn/home/about.html">About</a><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">Chapter 4. File Synchronize</th></tr><tr><td width="20%" align="left"><a accesskey="p" href="../ftp/pureftpd.html">Prev</a> </td><th width="60%" align="center"> </th><td width="20%" align="right"> <a accesskey="n" href="tsync.html">Next</a></td></tr></table><hr /></div><div class="chapter"><div class="titlepage"><div><div><h1 class="title"><a id="index"></a>Chapter 4. File Synchronize</h1></div></div></div><div class="toc"><p><strong>Table of Contents</strong></p><dl class="toc"><dt><span class="section"><a href="index.html#rsync">4.1. rsync - fast remote file copy program (like rcp)</a></span></dt><dd><dl><dt><span class="section"><a href="index.html#rsync.setup">4.1.1. Installing Rsync and Configuring the Daemon</a></span></dt><dd><dl><dt><span class="section"><a href="index.html#source">4.1.1.1. install with source</a></span></dt><dt><span class="section"><a href="index.html#aptitude">4.1.1.2. install with aptitude</a></span></dt><dt><span class="section"><a href="index.html#rsync.xinetd">4.1.1.3. xinetd</a></span></dt><dt><span class="section"><a href="index.html#systemctl">4.1.1.4. CentOS 7 - systemctl</a></span></dt></dl></dd><dt><span class="section"><a href="index.html#rsyncd.conf">4.1.2. rsyncd.conf</a></span></dt><dt><span class="section"><a href="index.html#rsync.option">4.1.3. rsync option reference</a></span></dt><dd><dl><dt><span class="section"><a href="index.html#idp34">4.1.3.1. -n, --dry-run perform a trial run with no changes made</a></span></dt><dt><span class="section"><a href="index.html#idp35">4.1.3.2. --bwlimit=KBPS limit I/O bandwidth; KBytes per second</a></span></dt><dt><span class="section"><a href="index.html#idp36">4.1.3.3. -e, --rsh=COMMAND specify the remote shell to use</a></span></dt></dl></dd><dt><span class="section"><a href="index.html#example">4.1.4. step by step to learn rsync</a></span></dt><dt><span class="section"><a href="index.html#rsync.example">4.1.5. rsync examples</a></span></dt><dd><dl><dt><span class="section"><a href="index.html#idp37">4.1.5.1. upload</a></span></dt><dt><span class="section"><a href="index.html#idp38">4.1.5.2. download</a></span></dt><dt><span class="section"><a href="index.html#idp39">4.1.5.3. mirror</a></span></dt><dt><span class="section"><a href="index.html#idp40">4.1.5.4. rsync delete</a></span></dt><dt><span class="section"><a href="index.html#idp41">4.1.5.5. backup to a central backup server with 7 day incremental</a></span></dt><dt><span class="section"><a href="index.html#idp42">4.1.5.6. backup to a spare disk</a></span></dt><dt><span class="section"><a href="index.html#idp43">4.1.5.7. mirroring vger CVS tree</a></span></dt><dt><span class="section"><a href="index.html#idp44">4.1.5.8. automated backup at home</a></span></dt><dt><span class="section"><a href="index.html#idp45">4.1.5.9. Fancy footwork with remote file lists</a></span></dt></dl></dd><dt><span class="section"><a href="index.html#rsync.windows">4.1.6. rsync for windows</a></span></dt><dt><span class="section"><a href="index.html#rsync.sh">4.1.7. Multi-process rsync script</a></span></dt></dl></dd><dt><span class="section"><a href="tsync.html">4.2. tsync</a></span></dt><dt><span class="section"><a href="lsyncd.html">4.3. lsyncd</a></span></dt><dd><dl><dt><span class="section"><a href="lsyncd.html#idp46">4.3.1. Installation</a></span></dt><dt><span class="section"><a href="lsyncd.html#idp51">4.3.2. Configuring lsyncd.conf</a></span></dt><dd><dl><dt><span class="section"><a href="lsyncd.html#idp50">4.3.2.1. lsyncd.conf configuration options</a></span></dt><dd><dl><dt><span class="section"><a href="lsyncd.html#idp47">4.3.2.1.1. settings (global settings)</a></span></dt><dt><span class="section"><a href="lsyncd.html#idp48">4.3.2.1.2. sync (synchronization parameters)</a></span></dt><dt><span class="section"><a href="lsyncd.html#idp49">4.3.2.1.3. rsync</a></span></dt></dl></dd></dl></dd><dt><span class="section"><a href="lsyncd.html#idp52">4.3.3. Configuration example</a></span></dt></dl></dd><dt><span class="section"><a href="unison.html">4.4. Unison File Synchronizer</a></span></dt><dd><dl><dt><span class="section"><a href="unison.html#idp53">4.4.1. local</a></span></dt><dt><span class="section"><a href="unison.html#idp54">4.4.2. remote</a></span></dt><dt><span class="section"><a href="unison.html#idp55">4.4.3. config</a></span></dt></dl></dd><dt><span class="section"><a href="csync2.html">4.5. csync2 - cluster synchronization tool</a></span></dt><dd><dl><dt><span class="section"><a href="csync2.html#idp56">4.5.1. server</a></span></dt><dt><span class="section"><a href="csync2.html#idp57">4.5.2. node</a></span></dt><dt><span class="section"><a href="csync2.html#idp58">4.5.3. test</a></span></dt><dt><span class="section"><a href="csync2.html#idp59">4.5.4. Advanced Configuration</a></span></dt><dt><span class="section"><a href="csync2.html#idp60">4.5.5. Building from source</a></span></dt></dl></dd><dt><span class="section"><a href="synctool.html">4.6. synctool</a></span></dt></dl></div>
<div class="section"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a id="rsync"></a>4.1. rsync - fast remote file copy program (like rcp)</h2></div></div></div>
<p>rsync is an open source utility that provides fast incremental file transfer. rsync is freely available under the GNU General Public License version 2 and is currently being maintained by Wayne Davison.</p>
<div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="rsync.setup"></a>4.1.1. Installing Rsync and Configuring the Daemon</h3></div></div></div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="source"></a>4.1.1.1. install with source</h4></div></div></div>
<div class="procedure"><a id="idp128"></a><p class="title"><strong>Procedure 4.1. rsync</strong></p><ol class="procedure" type="1"><li class="step"><p>Install rsync</p>
<p>rsync-2.5.6-20.i386.rpm can be found on the second CD of AS3</p>
<pre class="screen">
[root@linuxas3 root]# cd /mnt
[root@linuxas3 mnt]# mount cdrom
[root@linuxas3 mnt]# cd cdrom/RedHat/RPMS
[root@linuxas3 RPMS]# rpm -ivh rsync-2.5.6-20.i386.rpm
</pre>
</li><li class="step"><p>Configure /etc/rsyncd.conf</p>
<p>On RH9/AS3 systems the rsync package does not install an rsyncd.conf file; you have to create rsyncd.conf yourself</p>
<pre class="screen">
[root@linuxas3 root]# vi /etc/rsyncd.conf
uid=nobody
gid=nobody
max connections=5
use chroot=no
log file=/var/log/rsyncd.log
pid file=/var/run/rsyncd.pid
lock file=/var/run/rsyncd.lock
#auth users=root
secrets file=/etc/rsyncd.passwd
[postfix]
path=/var/mail
comment = backup mail
ignore errors
read only = yes
list = no
auth users = postfix
[netkiller]
path=/home/netkiller/web
comment = backup 9812.net
ignore errors
read only = yes
list = no
auth users = netkiller
[pgsqldb]
path=/var/lib/pgsql
comment = backup postgresql database
ignore errors
read only = yes
list = no
</pre>
<ol type="a" class="substeps">
<li class="step"><p>Option notes</p>
<pre class="screen">
uid = nobody
gid = nobody
use chroot = no # do not use chroot
max connections = 4 # at most 4 concurrent connections
pid file = /var/run/rsyncd.pid # process ID file
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log # log file
secrets file = /etc/rsyncd.pwd # authentication file holding the user passwords; mode 600 owned by root is recommended
[module] # module name used for authentication; the client must specify it
path = /var/mail # directory to mirror
comment = backup xxxx # comment
ignore errors # ignore unrelated I/O errors
read only = yes # read only
list = no # do not allow listing files
auth users = postfix # user names allowed to authenticate; without this line the module is anonymous
[other]
path = /path/to...
comment = xxxxx
</pre>
</li>
<li class="step"><p>Password file</p>
<p>Create a password file /etc/rsyncd.pwd on the server side</p>
<pre class="screen">
[root@linuxas3 root]# echo postfix:xxx >>/etc/rsyncd.pwd
[root@linuxas3 root]# echo netkiller:xxx >>/etc/rsyncd.pwd
[root@linuxas3 root]# chmod 600 /etc/rsyncd.pwd
</pre>
</li>
<li class="step"><p>Start the rsync daemon</p>
<pre class="screen">
[root@linuxas3 root]# rsync --daemon
</pre>
</li>
</ol>
</li><li class="step">
<p>Add it to the startup file</p>
<pre class="screen">
echo "rsync --daemon" >> /etc/rc.d/rc.local
</pre>
<p>Run cat /etc/rc.d/rc.local to confirm the entry</p>
</li><li class="step"><p>Test</p>
<pre class="screen">
[root@linux docbook]# rsync rsync://netkiller.8800.org/netkiller
[root@linux tmp]# rsync rsync://netkiller@netkiller.8800.org/netkiller
Password:
[chen@linux temp]$ rsync -vzrtopg --progress --delete postfix@netkiller.8800.org::postfix /tmp
Password:
</pre>
</li></ol></div>
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="aptitude"></a>4.1.1.2. install with aptitude</h4></div></div></div>
<div class="procedure"><a id="idp129"></a><p class="title"><strong>Procedure 4.2. installation step by step</strong></p><ol class="procedure" type="1"><li class="step">
<p>installation</p>
<pre class="screen">
$ sudo apt-get install rsync
</pre>
</li><li class="step">
<p>enable</p>
<pre class="screen">
$ sudo vim /etc/default/rsync
RSYNC_ENABLE=true
</pre>
</li><li class="step">
<p>config /etc/rsyncd.conf</p>
<pre class="screen">
$ sudo vim /etc/rsyncd.conf
uid=nobody
gid=nobody
max connections=5
use chroot=no
pid file=/var/run/rsyncd.pid
lock file=/var/run/rsyncd.lock
log file=/var/log/rsyncd.log
#auth users=root
secrets file=/etc/rsyncd.secrets
[neo]
path=/home/neo/www
comment = backup neo
ignore errors
read only = yes
list = no
auth users = neo
[netkiller]
path=/home/netkiller/public_html
comment = backup netkiller
ignore errors
read only = yes
list = no
auth users = netkiller
[mirror]
path=/var/www/netkiller.8800.org/html/
comment = mirror netkiller.8800.org
exclude = .svn
ignore errors
read only = yes
list = yes
[music]
path=/var/music
comment = backup music database
ignore errors
read only = yes
list = no
[pgsqldb]
path=/var/lib/pgsql
comment = backup postgresql database
ignore errors
read only = yes
list = no
auth users = neo,netkiller
</pre>
</li><li class="step">
<p>/etc/rsyncd.secrets</p>
<pre class="screen">
$ sudo vim /etc/rsyncd.secrets
neo:123456
netkiller:123456
</pre>
<p></p>
<pre class="screen">
$ sudo chmod 600 /etc/rsyncd.secrets
</pre>
</li><li class="step">
<p>start</p>
<pre class="screen">
$ sudo /etc/init.d/rsync start
</pre>
</li><li class="step">
<p>test</p>
<pre class="screen">
$ rsync -vzrtopg --progress --delete neo@localhost::neo /tmp/test1/
$ rsync -vzrtopg --progress --delete localhost::music /tmp/test2/
</pre>
</li><li class="step">
<p>firewall</p>
<pre class="screen">
$ sudo ufw allow rsync
</pre>
</li></ol></div>
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="rsync.xinetd"></a>4.1.1.3. xinetd</h4></div></div></div>
<p>CentOS 6 and earlier can use xinetd; it is not recommended on CentOS 7</p>
<pre class="screen">
yum install xinetd
</pre>
<p>Configure /etc/xinetd.d/rsync</p>
<pre class="screen">
vim /etc/xinetd.d/rsync
# default: off
# description: The rsync server is a good addition to an ftp server, as it \
# allows crc checksumming etc.
service rsync
{
disable = yes
flags = IPv6
socket_type = stream
wait = no
user = root
server = /usr/bin/rsync
server_args = --daemon
log_on_failure += USERID
}
</pre>
<p>Change disable = yes to disable = no</p>
<p># vim /etc/rsyncd.conf</p>
<pre class="screen">
chkconfig xinetd on
/etc/init.d/xinetd restart
</pre>
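<p>After restarting xinetd you can confirm that something is listening on the rsync port (873 by default). This is just one way to check; the flags shown assume a Linux netstat:</p>
<pre class="screen">
netstat -tlnp | grep 873
</pre>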
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="systemctl"></a>4.1.1.4. CentOS 7 - systemctl</h4></div></div></div>
<pre class="screen">
systemctl enable rsyncd
systemctl start rsyncd
systemctl restart rsyncd
systemctl stop rsyncd
</pre>
<p>Daemon startup options in /etc/sysconfig/rsyncd</p>
<pre class="screen">
# cat /etc/sysconfig/rsyncd
OPTIONS=""
</pre>
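<p>Extra daemon flags can be passed through this variable; for example (illustrative values only, substitute your own address and config path):</p>
<pre class="screen">
OPTIONS="--address=192.168.1.171 --config=/etc/rsyncd.conf"
</pre>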
<p>Startup script (systemd unit)</p>
<pre class="screen">
# cat /usr/lib/systemd/system/rsyncd.service
[Unit]
Description=fast remote file copy program daemon
ConditionPathExists=/etc/rsyncd.conf
[Service]
EnvironmentFile=/etc/sysconfig/rsyncd
ExecStart=/usr/bin/rsync --daemon --no-detach "$OPTIONS"
[Install]
WantedBy=multi-user.target
</pre>
</div>
</div>
<div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="rsyncd.conf"></a>4.1.2. rsyncd.conf</h3></div></div></div>
<pre class="screen">
# Minimal configuration file for rsync daemon
# See rsync(1) and rsyncd.conf(5) man pages for help
# This line is required by the /etc/init.d/rsyncd script
pid file = /var/run/rsyncd.pid
port = 873
address = 192.168.1.171
#uid = nobody
#gid = nobody
uid = root
gid = root
use chroot = yes
read only = yes
#limit access to private LANs
hosts allow=192.168.1.0/255.255.255.0 10.0.1.0/255.255.255.0
hosts deny=*
max connections = 5
motd file = /etc/rsyncd/rsyncd.motd
#This will give you a separate log file
#log file = /var/log/rsync.log
#This will log every file transferred - up to 85,000+ per user, per sync
#transfer logging = yes
log format = %t %a %m %f %b
syslog facility = local3
timeout = 300
[home]
path = /home
list=yes
ignore errors
auth users = linux
secrets file = /etc/rsyncd/rsyncd.secrets
comment = linuxsir home
exclude = beinan/ samba/
[beinan]
path = /opt
list=no
ignore errors
comment = optdir
auth users = beinan
secrets file = /etc/rsyncd/rsyncd.secrets
[www]
path = /www/
ignore errors
read only = true
list = false
hosts allow = 172.16.1.1
hosts deny = 0.0.0.0/32
auth users = backup
secrets file = /etc/backserver.pas
[web_user1]
path = /home/web_user1/
ignore errors
read only = true
list = false
hosts allow = 202.99.11.121
hosts deny = 0.0.0.0/32
uid = web_user1
gid = web_user1
auth users = backup
secrets file = /etc/backserver.pas
[pub]
comment = Random things available for download
path = /path/to/my/public/share
read only = yes
list = yes
uid = nobody
gid = nobody
auth users = pub
secrets file = /etc/rsyncd.secrets
</pre>
</div>
<div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="rsync.option"></a>4.1.3. rsync option reference</h3></div></div></div>
<pre class="screen">
Command-line options
-v, --verbose verbose output
-q, --quiet quiet output mode
-c, --checksum turn on checksumming; force verification of transferred files
-a, --archive archive mode: transfer files recursively and preserve all attributes, equal to -rlptgoD
-r, --recursive recurse into subdirectories
-R, --relative use relative path information
-b, --backup make backups; when a file of the same name already exists at the destination, the old file is renamed to ~filename. Use the --suffix option to choose a different backup suffix.
--backup-dir store backup files (such as ~filename) in the given directory
--suffix=SUFFIX define the backup file suffix
-u, --update only update, i.e. skip any file that already exists at DST with a newer timestamp than the file to be backed up (do not overwrite newer files)
-l, --links preserve symlinks
-L, --copy-links treat symlinks like regular files
--copy-unsafe-links only transform symlinks that point outside the SRC directory tree
--safe-links ignore symlinks that point outside the SRC directory tree
-H, --hard-links preserve hard links
-p, --perms preserve file permissions
-o, --owner preserve file owner
-g, --group preserve file group
-D, --devices preserve device files
-t, --times preserve file times
-S, --sparse handle sparse files specially to save space at DST
-n, --dry-run show which files would be transferred
-W, --whole-file copy files whole, without the incremental (delta) check
-x, --one-file-system don't cross filesystem boundaries
-B, --block-size=SIZE block size used by the checksum algorithm; default 700 bytes
-e, --rsh=COMMAND use rsh or ssh for the data transfer
--rsync-path=PATH path to the rsync command on the remote server
-C, --cvs-exclude auto-ignore files in the same way CVS does, to exclude files you don't want transferred
--existing only update files that already exist at DST; don't back up newly created files
--delete delete files from DST that don't exist in SRC
--delete-excluded also delete files on the receiving side that are excluded by this option
--delete-after delete after the transfer has finished
--ignore-errors delete even if I/O errors occur
--max-delete=NUM delete at most NUM files
--partial keep files that were only partially transferred, to speed up a subsequent re-transfer
--force force deletion of directories even if not empty
--numeric-ids don't map numeric uid/gid values to user and group names
--timeout=TIME I/O timeout in seconds
-I, --ignore-times don't skip files that have the same time and length
--size-only when deciding whether to back up a file, look only at the size, not the time
--modify-window=NUM timestamp window used when deciding whether file times are equal; default 0
-T, --temp-dir=DIR create temporary files in DIR
--compare-dest=DIR also compare files in DIR to decide whether a backup is needed
-P same as --partial --progress
--progress show progress during the transfer
-z, --compress compress file data during the transfer
--exclude=PATTERN exclude files matching PATTERN from the transfer
--include=PATTERN don't exclude files matching PATTERN
--exclude-from=FILE exclude files matching the patterns listed in FILE
--include-from=FILE don't exclude files matching the patterns listed in FILE
--version print version information
--address bind to a specific address
--config=FILE use another configuration file instead of the default rsyncd.conf
--port=PORT specify an alternate rsync service port
--blocking-io use blocking I/O for the remote shell
--stats give transfer statistics for the files
--log-format=FORMAT specify the log file format
--password-file=FILE get the password from FILE
--bwlimit=KBPS limit I/O bandwidth; KBytes per second
-h, --help show help
</pre>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp34"></a>4.1.3.1. -n, --dry-run perform a trial run with no changes made</h4></div></div></div>
<p>Trial run: prints what would be done, but does not copy anything.</p>
<pre class="screen">
rsync -anvzP /www/* root@172.16.0.1:/www
</pre>
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp35"></a>4.1.3.2. --bwlimit=KBPS limit I/O bandwidth; KBytes per second</h4></div></div></div>
<p>Bandwidth limit; here the transfer is capped at 100 KBytes/s.</p>
<pre class="screen">
rsync -auvzP --bwlimit=100 /www/* root@172.16.0.1:/www
</pre>
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp36"></a>4.1.3.3. -e, --rsh=COMMAND specify the remote shell to use</h4></div></div></div>
<pre class="screen">
rsync -auzv --rsh=ssh root@202.130.101.33:/www/example.com/* /backup/example.com/
# --rsh=ssh can be omitted
rsync -auzv root@202.130.101.33:/www/example.com/* /backup/example.com/
</pre>
<p>If the remote shell needs extra options, write it like this; here the SSH connection port is set to 20:</p>
<pre class="screen">
rsync -auzv --rsh='ssh -p20' root@202.130.101.34:/www/example.com/* /backup/example.com/
</pre>
</div>
</div>
<div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="example"></a>4.1.4. step by step to learn rsync</h3></div></div></div>
<div class="procedure"><ol class="procedure" type="1"><li class="step">
<p>transfer file from src to dest directory</p>
<pre class="screen">
neo@netkiller:/tmp$ mkdir rsync
neo@netkiller:/tmp$ cd rsync/
neo@netkiller:/tmp/rsync$ ls
neo@netkiller:/tmp/rsync$ mkdir src dest
neo@netkiller:/tmp/rsync$ echo file1 > src/file1
neo@netkiller:/tmp/rsync$ echo file2 > src/file2
neo@netkiller:/tmp/rsync$ echo file3 > src/file3
</pre>
</li><li class="step">
<p>skipping directory</p>
<pre class="screen">
neo@netkiller:/tmp/rsync$ mkdir src/dir1
neo@netkiller:/tmp/rsync$ mkdir src/dir2
neo@netkiller:/tmp/rsync$ rsync src/* dest/
skipping directory src/dir1
skipping directory src/dir2
</pre>
</li><li class="step">
<p>recurse into directories</p>
<pre class="screen">
neo@netkiller:/tmp/rsync$ rsync -r src/* dest/
neo@netkiller:/tmp/rsync$ ls dest/
dir1 dir2 file1 file2 file3
</pre>
</li><li class="step">
<p>backup</p>
<pre class="screen">
neo@netkiller:/tmp/rsync$ rsync -r --backup --suffix=.2008-11-21 src/* dest/
neo@netkiller:/tmp/rsync$ ls dest/
dir1 dir2 file1 file1.2008-11-21 file2 file2.2008-11-21 file3 file3.2008-11-21
neo@netkiller:/tmp/rsync$
</pre>
<p>backup-dir</p>
<pre class="screen">
neo@netkiller:/tmp/rsync$ rsync -r --backup --suffix=.2008-11-21 --backup-dir mybackup src/* dest/
neo@netkiller:/tmp/rsync$ ls dest/
dir1 dir2 file1 file1.2008-11-21 file2 file2.2008-11-21 file3 file3.2008-11-21 mybackup
neo@netkiller:/tmp/rsync$ ls dest/mybackup/
file1.2008-11-21 file2.2008-11-21 file3.2008-11-21
</pre>
<p></p>
<pre class="screen">
rsync -r --backup --suffix=.2008-11-21 --backup-dir ../mybackup src/* dest/
neo@netkiller:/tmp/rsync$ ls
dest mybackup src
neo@netkiller:/tmp/rsync$ ls src/
dir1 dir2 file1 file2 file3
</pre>
</li><li class="step">
<p>update</p>
<pre class="screen">
neo@netkiller:/tmp/rsync$ rm -rf dest/*
neo@netkiller:/tmp/rsync$ rsync -r -u src/* dest/
neo@netkiller:/tmp/rsync$ echo netkiller>>src/file2
neo@netkiller:/tmp/rsync$ rsync -v -r -u src/* dest/
building file list ... done
file2
sent 166 bytes received 42 bytes 416.00 bytes/sec
total size is 38 speedup is 0.18
</pre>
<p>update by time and size</p>
<pre class="screen">
neo@netkiller:/tmp/rsync$ echo Hi>src/dir1/file1.1
neo@netkiller:/tmp/rsync$ rsync -v -r -u src/* dest/
building file list ... done
dir1/file1.1
sent 166 bytes received 42 bytes 416.00 bytes/sec
total size is 41 speedup is 0.20
</pre>
</li><li class="step">
<p>--archive</p>
<pre class="screen">
rsync -a src/ dest/
</pre>
</li><li class="step">
<p>--compress</p>
<pre class="screen">
rsync -a -z src/ dest/
</pre>
</li><li class="step">
<p>--delete</p>
<p>src</p>
<pre class="screen">
svn@netkiller:~$ ls src/
dir1 dir2 file1 file2 file3
</pre>
<p>dest</p>
<pre class="screen">
neo@netkiller:~$ rsync -v -u -a --delete -e ssh svnroot@127.0.0.1:/home/svnroot/src /tmp/dest
svnroot@127.0.0.1's password:
receiving file list ... done
created directory /tmp/dest
src/
src/file1
src/file2
src/file3
src/dir1/
src/dir2/
sent 104 bytes received 309 bytes 118.00 bytes/sec
total size is 0 speedup is 0.00
</pre>
<p>src</p>
<pre class="screen">
svn@netkiller:~$ rm -rf src/file2
svn@netkiller:~$ rm -rf src/dir2
</pre>
<p>dest</p>
<pre class="screen">
neo@netkiller:~$ rsync -v -u -a --delete -e ssh svnroot@127.0.0.1:/home/svnroot/src /tmp/dest
svnroot@127.0.0.1's password:
receiving file list ... done
deleting src/dir2/
deleting src/file2
src/
sent 26 bytes received 144 bytes 68.00 bytes/sec
total size is 0 speedup is 0.00
</pre>
</li></ol></div>
</div>
<div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="rsync.example"></a>4.1.5. rsync examples</h3></div></div></div>
<p><a class="ulink" href="http://samba.anu.edu.au/rsync/examples.html" target="_top">http://samba.anu.edu.au/rsync/examples.html</a></p>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp37"></a>4.1.5.1. upload</h4></div></div></div>
<pre class="screen">
$ rsync -v -u -a --delete --rsh=ssh --stats localfile username@hostname:/home/username/
</pre>
<p>for example:</p>
<p>I want to copy local workspace of eclipse directory to another computer.</p>
<pre class="screen">
$ rsync -v -u -a --delete --rsh=ssh --stats workspace neo@192.168.245.131:/home/neo/
</pre>
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp38"></a>4.1.5.2. download</h4></div></div></div>
<pre class="screen">
$ rsync -v -u -a --delete --rsh=ssh --stats neo@192.168.245.131:/home/neo/* /tmp/
</pre>
</div><div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp39"></a>4.1.5.3. mirror</h4></div></div></div>
<p>How to use rsync against an rsync daemon:</p>
<p>rsync rsync://USER@HOST/MODULE</p>
<pre class="screen">
rsync -vzrtopg --progress --delete USER@HOST::MODULE /path/to/mirror
</pre>
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp40"></a>4.1.5.4. rsync delete</h4></div></div></div>
<div class="example"><a id="idp113"></a><p class="title"><strong>Example 4.1. examples</strong></p><div class="example-contents">
<p>Using rsync to empty a target directory:</p>
<div class="literallayout"><p><br />
<br />
mkdir /root/blank<br />
rsync --delete-before -a -H -v --progress --stats /root/blank/ ./cache/<br />
<br />
</p></div>
</div></div><br class="example-break" />
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp41"></a>4.1.5.5. backup to a central backup server with 7 day incremental</h4></div></div></div>
<div class="example"><a id="idp114"></a><p class="title"><strong>Example 4.2. backup to a central backup server with 7 day incremental</strong></p><div class="example-contents">
<pre class="screen">
#!/bin/sh
# This script does personal backups to a rsync backup server. You will end up
# with a 7 day rotating incremental backup. The incrementals will go
# into subdirectories named after the day of the week, and the current
# full backup goes into a directory called "current"
# tridge@linuxcare.com
# directory to backup
BDIR=/home/$USER
# excludes file - this contains a wildcard pattern per line of files to exclude
EXCLUDES=$HOME/cron/excludes
# the name of the backup machine
BSERVER=owl
# your password on the backup server
export RSYNC_PASSWORD=XXXXXX
########################################################################
BACKUPDIR=`date +%A`
OPTS="--force --ignore-errors --delete-excluded --exclude-from=$EXCLUDES
--delete --backup --backup-dir=/$BACKUPDIR -a"
export PATH=$PATH:/bin:/usr/bin:/usr/local/bin
# the following line clears the last weeks incremental directory
[ -d $HOME/emptydir ] || mkdir $HOME/emptydir
rsync --delete -a $HOME/emptydir/ $BSERVER::$USER/$BACKUPDIR/
rmdir $HOME/emptydir
# now the actual transfer
rsync $OPTS $BDIR $BSERVER::$USER/current
</pre>
</div></div><br class="example-break" />
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp42"></a>4.1.5.6. backup to a spare disk</h4></div></div></div>
<div class="example"><a id="idp115"></a><p class="title"><strong>Example 4.3. backup to a spare disk</strong></p><div class="example-contents">
<pre class="screen">
I do local backups on several of my machines using rsync. I have an
extra disk installed that can hold all the contents of the main
disk. I then have a nightly cron job that backs up the main disk to
the backup. This is the script I use on one of those machines.
#!/bin/sh
export PATH=/usr/local/bin:/usr/bin:/bin
LIST="rootfs usr data data2"
for d in $LIST; do
mount /backup/$d
rsync -ax --exclude fstab --delete /$d/ /backup/$d/
umount /backup/$d
done
DAY=`date "+%A"`
rsync -a --delete /usr/local/apache /data2/backups/$DAY
rsync -a --delete /data/solid /data2/backups/$DAY
The first part does the backup on the spare disk. The second part
backs up the critical parts to daily directories. I also backup the
critical parts using a rsync over ssh to a remote machine.
</pre>
</div></div><br class="example-break" />
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp43"></a>4.1.5.7. mirroring vger CVS tree</h4></div></div></div>
<div class="example"><a id="idp116"></a><p class="title"><strong>Example 4.4. mirroring vger CVS tree</strong></p><div class="example-contents">
<pre class="screen">
The vger.rutgers.edu cvs tree is mirrored onto cvs.samba.org via
anonymous rsync using the following script.
#!/bin/bash
cd /var/www/cvs/vger/
PATH=/usr/local/bin:/usr/freeware/bin:/usr/bin:/bin
RUN=`ps x | grep rsync | grep -v grep | wc -l`
if [ "$RUN" -gt 0 ]; then
echo already running
exit 1
fi
rsync -az vger.rutgers.edu::cvs/CVSROOT/ChangeLog $HOME/ChangeLog
sum1=`sum $HOME/ChangeLog`
sum2=`sum /var/www/cvs/vger/CVSROOT/ChangeLog`
if [ "$sum1" = "$sum2" ]; then
echo nothing to do
exit 0
fi
rsync -az --delete --force vger.rutgers.edu::cvs/ /var/www/cvs/vger/
exit 0
Note in particular the initial rsync of the ChangeLog to determine if
anything has changed. This could be omitted but it would mean that the
rsyncd on vger would have to build a complete listing of the cvs area
at each run. As most of the time nothing will have changed I wanted to
save the time on vger by only doing a full rsync if the ChangeLog has
changed. This helped quite a lot because vger is low on memory and
generally quite heavily loaded, so doing a listing on such a large
tree every hour would have been excessive.
</pre>
</div></div><br class="example-break" />
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp44"></a>4.1.5.8. automated backup at home</h4></div></div></div>
<div class="example"><a id="idp117"></a><p class="title"><strong>Example 4.5. automated backup at home</strong></p><div class="example-contents">
<pre class="screen">
I use rsync to backup my wifes home directory across a modem link each
night. The cron job looks like this
#!/bin/sh
cd ~susan
{
echo
date
dest=~/backup/`date +%A`
mkdir $dest.new
find . -xdev -type f \( -mtime 0 -or -mtime 1 \) -exec cp -aPv "{}"
$dest.new \;
cnt=`find $dest.new -type f | wc -l`
if [ $cnt -gt 0 ]; then
rm -rf $dest
mv $dest.new $dest
fi
rm -rf $dest.new
rsync -Cavze ssh . samba:backup
} >> ~/backup/backup.log 2>&1
note that most of this script isn't anything to do with rsync, it just
creates a daily backup of Susans work in a ~susan/backup/ directory so
she can retrieve any version from the last week. The last line does
the rsync of her directory across the modem link to the host
samba. Note that I am using the -C option which allows me to add
entries to .cvsignore for stuff that doesn't need to be backed up.
</pre>
</div></div><br class="example-break" />
</div>
<div class="section"><div class="titlepage"><div><div><h4 class="title"><a id="idp45"></a>4.1.5.9. Fancy footwork with remote file lists</h4></div></div></div>
<div class="example"><a id="idp118"></a><p class="title"><strong>Example 4.6. Fancy footwork with remote file lists</strong></p><div class="example-contents">
<pre class="screen">
One little known feature of rsync is the fact that when run over a
remote shell (such as rsh or ssh) you can give any shell command as
the remote file list. The shell command is expanded by your remote
shell before rsync is called. For example, see if you can work out
what this does:
rsync -avR remote:'`find /home -name "*.[ch]"`' /tmp/
note that that is backquotes enclosed by quotes (some browsers don't
show that correctly).
</pre>
</div></div><br class="example-break" />
</div>
</div>
<div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="rsync.windows"></a>4.1.6. rsync for windows</h3></div></div></div>
<p><a class="ulink" href="http://www.rsync.net/resources/howto/windows_rsync.html" target="_top">http://www.rsync.net/resources/howto/windows_rsync.html</a></p>
</div>
<div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="rsync.sh"></a>4.1.7. Multi-process rsync script</h3></div></div></div>
<pre class="screen">
#!/usr/bin/perl
my $path = "/data"; # local directory
my $ip="172.16.xxx.xxx"; # remote host
my $maxchild=5; # maximum number of concurrent rsync processes
open FILE,"ls $path|";
while(<FILE>)
{
chomp;
my $filename = $_;
my $i = 1;
while($i<=1){
my $un = `ps -ef |grep rsync|grep -v grep |grep avl|wc -l`;
$i =$i+1;
if( $un < $maxchild){
system("rsync -avl --size-only $path/$_ $ip:$path &") ;
}else{
sleep 5;
$i = 1;
}
}
}
</pre>
</div>
</div>
</div></body></html>
/// @ref ext_vector_integer
/// @file glm/ext/vector_integer.hpp
///
/// @see core (dependence)
/// @see ext_vector_integer (dependence)
///
/// @defgroup ext_vector_integer GLM_EXT_vector_integer
/// @ingroup ext
///
/// Include <glm/ext/vector_integer.hpp> to use the features of this extension.
#pragma once
// Dependencies
#include "../detail/setup.hpp"
#include "../detail/qualifier.hpp"
#include "../detail/_vectorize.hpp"
#include "../vector_relational.hpp"
#include "../common.hpp"
#include <limits>
#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
# pragma message("GLM: GLM_EXT_vector_integer extension included")
#endif
namespace glm
{
/// @addtogroup ext_vector_integer
/// @{
/// Return true if the value is a power of two number.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Signed or unsigned integer scalar types.
/// @tparam Q Value from qualifier enum
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, bool, Q> isPowerOfTwo(vec<L, T, Q> const& v);
	/// Return the power of two number whose value is just higher than the input value,
	/// i.e. the input rounded up to a power of two.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Signed or unsigned integer scalar types.
/// @tparam Q Value from qualifier enum
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> nextPowerOfTwo(vec<L, T, Q> const& v);
	/// Return the power of two number whose value is just lower than the input value,
	/// i.e. the input rounded down to a power of two.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Signed or unsigned integer scalar types.
/// @tparam Q Value from qualifier enum
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> prevPowerOfTwo(vec<L, T, Q> const& v);
/// Return true if the 'Value' is a multiple of 'Multiple'.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Signed or unsigned integer scalar types.
/// @tparam Q Value from qualifier enum
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, bool, Q> isMultiple(vec<L, T, Q> const& v, T Multiple);
/// Return true if the 'Value' is a multiple of 'Multiple'.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Signed or unsigned integer scalar types.
/// @tparam Q Value from qualifier enum
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, bool, Q> isMultiple(vec<L, T, Q> const& v, vec<L, T, Q> const& Multiple);
/// Higher multiple number of Source.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Signed or unsigned integer scalar types.
/// @tparam Q Value from qualifier enum
///
	/// @param v Source values to which the function is applied
/// @param Multiple Must be a null or positive value
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> nextMultiple(vec<L, T, Q> const& v, T Multiple);
/// Higher multiple number of Source.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Signed or unsigned integer scalar types.
/// @tparam Q Value from qualifier enum
///
	/// @param v Source values to which the function is applied
/// @param Multiple Must be a null or positive value
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> nextMultiple(vec<L, T, Q> const& v, vec<L, T, Q> const& Multiple);
/// Lower multiple number of Source.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Signed or unsigned integer scalar types.
/// @tparam Q Value from qualifier enum
///
	/// @param v Source values to which the function is applied
/// @param Multiple Must be a null or positive value
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> prevMultiple(vec<L, T, Q> const& v, T Multiple);
/// Lower multiple number of Source.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Signed or unsigned integer scalar types.
/// @tparam Q Value from qualifier enum
///
	/// @param v Source values to which the function is applied
/// @param Multiple Must be a null or positive value
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> prevMultiple(vec<L, T, Q> const& v, vec<L, T, Q> const& Multiple);
/// Returns the bit number of the Nth significant bit set to
/// 1 in the binary representation of value.
/// If value bitcount is less than the Nth significant bit, -1 will be returned.
///
/// @tparam L An integer between 1 and 4 included that qualify the dimension of the vector.
/// @tparam T Signed or unsigned integer scalar types.
///
/// @see ext_vector_integer
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, int, Q> findNSB(vec<L, T, Q> const& Source, vec<L, int, Q> SignificantBitCount);
/// @}
} //namespace glm
#include "vector_integer.inl"
1 NEUVOSTO neuvosto N N N Nom Sg 2 attr _ _
2 EURATOMIN Euratom N N N Prop Gen Sg 3 attr _ _
3 HANKINTAKESKUKSEN hankinta#keskus N N N Gen Sg 4 attr _ _
4 PERUSSÄÄNTÖ perus#sääntö N N N Nom Sg 7 attr _ _
5 EUROOPAN Eurooppa N N N Prop Gen Sg 6 attr _ _
6 ATOMIENERGIAYHTEISÖN atomi#energia#yhteisö N N N Gen Sg 7 attr _ _
7 NEUVOSTO neuvosto N N N Nom Sg 0 main _ _
8 , , Punct Punct Punct _ _ _ _
9 joka joka Pron Pron Pron Rel Nom Sg 10 subj _ _
10 ottaa ottaa V V V Prs Act Sg3 7 mod _ _
11 huomioon huomio N N N Ill Sg 10 phrv _ _
12 perustamissopimuksen perustamis#sopimus N N N Gen Sg 14 attr _ _
13 54 54 Num Num Num Digit Nom Sg 14 attr _ _
14 artiklan artikla N N N Gen Sg 10 obj _ _
15 , , Punct Punct Punct 17 phrm _ _
16 ja ja CC CC CC 17 phrm _ _
17 ottaa ottaa V V V Prs Act Sg3 10 conjunct _ _
18 huomioon huomio N N N Ill Sg 17 phrv _ _
19 komission komissio N N N Gen Sg 20 subj _ _
20 ehdotuksen ehdotus N N N Gen Sg 17 obj _ _
21 , , Punct Punct Punct 23 phrm _ _
22 ON olla V V V Prs Act Sg3 23 aux _ _
23 PÄÄTTÄNYT päättää PrfPrc PrfPrc PrfPrc Act Pos Nom Sg 17 conjunct _ _
24 antaa antaa V V V Inf1 Lat 23 obj _ _
25 Euratomin Euratom N N N Prop Gen Sg 26 attr _ _
26 hankintakeskuksen hankinta#keskus N N N Gen Sg 27 attr _ _
27 perussäännön perus#sääntö N N N Gen Sg 24 obj _ _
28 seuraavasti seuraava Adv Adv Adv Pos Man 24 advl _ _
29 : : Punct Punct Punct _ _ _ _
1 1 1 Num Num Num Digit Nom Sg 2 attr _ _
2 artikla artikla N N N Nom Sg 3 attr _ _
3 Nimi nimi N N N Nom Sg 0 main _ _
4 ja ja CC CC CC 5 phrm _ _
5 tarkoitus tarkoitus N N N Nom Sg 3 conjunct _ _
// Copyright 2013 Dolphin Emulator Project
// Licensed under GPLv2+
// Refer to the license.txt file included.
// Originally written by Sven Peter <sven@fail0verflow.com> for anergistic.
// Integrated into Mephisto/CTUv2 by Cody Brocious
#include "Ctu.h"
const char GDB_STUB_START = '$';
const char GDB_STUB_END = '#';
const char GDB_STUB_ACK = '+';
const char GDB_STUB_NACK = '-';
#ifndef SIGTRAP
const uint32_t SIGTRAP = 5;
#endif
#ifndef SIGTERM
const uint32_t SIGTERM = 15;
#endif
#ifndef MSG_WAITALL
const uint32_t MSG_WAITALL = 8;
#endif
// For sample XML files see the GDB source /gdb/features
// GDB also wants the l character at the start
// This XML defines what the registers are for this specific ARM device
static const char* target_xml =
R"(<?xml version="1.0"?>
<!DOCTYPE target SYSTEM "gdb-target.dtd">
<target version="1.0">
<feature name="org.gnu.gdb.aarch64.core">
<reg name="x0" bitsize="64"/>
<reg name="x1" bitsize="64"/>
<reg name="x2" bitsize="64"/>
<reg name="x3" bitsize="64"/>
<reg name="x4" bitsize="64"/>
<reg name="x5" bitsize="64"/>
<reg name="x6" bitsize="64"/>
<reg name="x7" bitsize="64"/>
<reg name="x8" bitsize="64"/>
<reg name="x9" bitsize="64"/>
<reg name="x10" bitsize="64"/>
<reg name="x11" bitsize="64"/>
<reg name="x12" bitsize="64"/>
<reg name="x13" bitsize="64"/>
<reg name="x14" bitsize="64"/>
<reg name="x15" bitsize="64"/>
<reg name="x16" bitsize="64"/>
<reg name="x17" bitsize="64"/>
<reg name="x18" bitsize="64"/>
<reg name="x19" bitsize="64"/>
<reg name="x20" bitsize="64"/>
<reg name="x21" bitsize="64"/>
<reg name="x22" bitsize="64"/>
<reg name="x23" bitsize="64"/>
<reg name="x24" bitsize="64"/>
<reg name="x25" bitsize="64"/>
<reg name="x26" bitsize="64"/>
<reg name="x27" bitsize="64"/>
<reg name="x28" bitsize="64"/>
<reg name="x29" bitsize="64"/>
<reg name="x30" bitsize="64"/>
<reg name="sp" bitsize="64" type="data_ptr"/>
<reg name="pc" bitsize="64" type="code_ptr"/>
<reg name="cpsr" bitsize="32"/>
</feature>
</target>)";
uint8_t hexCharToValue(uint8_t hex) {
if(hex >= '0' && hex <= '9')
return hex - '0';
else if(hex >= 'a' && hex <= 'f')
return hex - 'a' + 0xA;
else if(hex >= 'A' && hex <= 'F')
return hex - 'A' + 0xA;
	LOG_ERROR(GdbStub, "Invalid nibble: %c (%02x)", hex, hex);
	return 0; // not reached if LOG_ERROR aborts; otherwise avoids falling off a non-void function
}
uint8_t nibbleToHex(uint8_t n) {
n &= 0xF;
if(n < 0xA)
return '0' + n;
else
return 'a' + n - 0xA;
}
uint64_t hexToInt(const uint8_t* src, size_t len) {
uint64_t output = 0;
while(len-- > 0) {
output = (output << 4) | hexCharToValue(src[0]);
src++;
}
return output;
}
void memToGdbHex(uint8_t* dest, const uint8_t* src, size_t len) {
while(len-- > 0) {
auto tmp = *src++;
*dest++ = nibbleToHex(tmp >> 4);
*dest++ = nibbleToHex(tmp);
}
}
void gdbHexToMem(uint8_t* dest, const uint8_t* src, size_t len) {
while(len-- > 0) {
*dest++ = (uint8_t) ((hexCharToValue(src[0]) << 4) | hexCharToValue(src[1]));
src += 2;
}
}
void intToGdbHex(uint8_t* dest, uint64_t v) {
for(auto i = 0; i < 16; i += 2) {
dest[i + 1] = nibbleToHex((uint8_t) (v >> (4 * i)));
dest[i] = nibbleToHex((uint8_t) (v >> (4 * (i + 1))));
}
}
uint64_t gdbHexToInt(const uint8_t* src) {
uint64_t output = 0;
for(int i = 0; i < 16; i += 2) {
output = (output << 4) | hexCharToValue(src[15 - i - 1]);
output = (output << 4) | hexCharToValue(src[15 - i]);
}
return output;
}
uint8_t calculateChecksum(const uint8_t* buffer, size_t length) {
return static_cast<uint8_t>(accumulate(buffer, buffer + length, 0, plus<uint8_t>()));
}
GdbStub::GdbStub(Ctu *_ctu) : ctu(_ctu) {
memoryBreak = false;
haltLoop = stepLoop = false;
enabled = false;
latestSignal = 0;
}
void GdbStub::enable(uint16_t port) {
LOG_INFO(GdbStub, "Starting GDB server on port %d...", port);
sockaddr_in saddr_server = {};
saddr_server.sin_family = AF_INET;
saddr_server.sin_port = htons(port);
saddr_server.sin_addr.s_addr = INADDR_ANY;
auto tmpsock = socket(PF_INET, SOCK_STREAM, 0);
if(tmpsock == -1)
LOG_ERROR(GdbStub, "Failed to create gdb socket");
auto reuse_enabled = 1;
if(setsockopt(tmpsock, SOL_SOCKET, SO_REUSEADDR, (const char*)&reuse_enabled, sizeof(reuse_enabled)) < 0)
LOG_ERROR(GdbStub, "Failed to set gdb socket option");
auto server_addr = reinterpret_cast<const sockaddr*>(&saddr_server);
socklen_t server_addrlen = sizeof(saddr_server);
if(bind(tmpsock, server_addr, server_addrlen) < 0)
LOG_ERROR(GdbStub, "Failed to bind gdb socket");
if(listen(tmpsock, 1) < 0)
LOG_ERROR(GdbStub, "Failed to listen to gdb socket");
LOG_INFO(GdbStub, "Waiting for gdb to connect...");
sockaddr_in saddr_client;
sockaddr* client_addr = reinterpret_cast<sockaddr*>(&saddr_client);
socklen_t client_addrlen = sizeof(saddr_client);
client = accept(tmpsock, client_addr, &client_addrlen);
if(client < 0)
LOG_ERROR(GdbStub, "Failed to accept gdb client");
else
LOG_INFO(GdbStub, "Client connected.");
enabled = true;
haltLoop = true;
remoteBreak = false;
}
uint8_t GdbStub::readByte() {
uint8_t c;
auto size = recv(client, reinterpret_cast<char*>(&c), 1, MSG_WAITALL);
if(size != 1)
LOG_ERROR(GdbStub, "recv failed : %ld", size);
return c;
}
guint GdbStub::reg(int x) {
auto thread = ctu->tm.current();
if(thread == nullptr)
thread = ctu->tm.last();
if(thread == nullptr)
return 0;
switch(x) {
case 31:
return thread->regs.SP;
case 32:
return thread->regs.PC;
default:
assert(x < 31);
return thread->regs.gprs[x];
}
}
void GdbStub::reg(int x, guint v) {
auto thread = ctu->tm.current();
if(thread == nullptr)
thread = ctu->tm.last();
if(thread == nullptr)
return;
switch(x) {
case 31:
thread->regs.SP = v;
break;
case 32:
thread->regs.PC = v;
break;
default:
assert(x < 31);
thread->regs.gprs[x] = v;
}
}
auto& GdbStub::getBreakpointList(BreakpointType type) {
switch(type) {
case BreakpointType::Execute:
return breakpointsExecute;
case BreakpointType::Write:
return breakpointsWrite;
case BreakpointType::Read:
case BreakpointType::Access:
case BreakpointType::None: // Should never happen
return breakpointsRead;
}
}
void GdbStub::removeBreakpoint(BreakpointType type, gptr addr) {
auto& p = getBreakpointList(type);
auto bp = p.find(addr);
if(bp != p.end()) {
LOG_DEBUG(GdbStub, "gdb: removed a breakpoint: " ADDRFMT " bytes at " ADDRFMT " of type %d",
bp->second.len, bp->second.addr, type);
ctu->cpu.removeBreakpoint(bp->second.hook);
p.erase(addr);
}
}
auto GdbStub::getNextBreakpointFromAddress(gptr addr, BreakpointType type) {
auto& p = getBreakpointList(type);
auto next_breakpoint = p.lower_bound(addr);
BreakpointAddress breakpoint;
if(next_breakpoint != p.end()) {
breakpoint.address = next_breakpoint->first;
breakpoint.type = type;
} else {
breakpoint.address = 0;
breakpoint.type = BreakpointType::None;
}
return breakpoint;
}
bool GdbStub::checkBreakpoint(gptr addr, BreakpointType type) {
auto& p = getBreakpointList(type);
auto bp = p.find(addr);
if(bp != p.end()) {
guint len = bp->second.len;
if(bp->second.active && (addr >= bp->second.addr && addr < bp->second.addr + len)) {
LOG_DEBUG(GdbStub,
"Found breakpoint type %d @ " ADDRFMT ", range: " ADDRFMT " - " ADDRFMT " (%d bytes)", type,
addr, bp->second.addr, bp->second.addr + len, (uint32_t) len);
return true;
}
}
return false;
}
void GdbStub::sendPacket(const char packet) {
if(send(client, &packet, 1, 0) != 1)
LOG_ERROR(GdbStub, "send failed");
}
void GdbStub::sendReply(const char* reply) {
LOG_DEBUG(GdbStub, "Reply: %s", reply);
memset(commandBuffer, 0, sizeof(commandBuffer));
commandLength = static_cast<uint32_t>(strlen(reply));
if(commandLength + 4 > sizeof(commandBuffer)) {
LOG_DEBUG(GdbStub, "commandBuffer overflow in sendReply");
return;
}
memcpy(commandBuffer + 1, reply, commandLength);
auto checksum = calculateChecksum(commandBuffer, commandLength + 1);
commandBuffer[0] = GDB_STUB_START;
commandBuffer[commandLength + 1] = GDB_STUB_END;
commandBuffer[commandLength + 2] = nibbleToHex(checksum >> 4);
commandBuffer[commandLength + 3] = nibbleToHex(checksum);
auto ptr = commandBuffer;
auto left = commandLength + 4;
while(left > 0) {
auto sent_size = send(client, reinterpret_cast<char*>(ptr), left, 0);
if(sent_size < 0)
LOG_ERROR(GdbStub, "gdb: send failed");
left -= sent_size;
ptr += sent_size;
}
}
void GdbStub::handleQuery() {
LOG_DEBUG(GdbStub, "gdb: query '%s'", commandBuffer + 1);
auto query = reinterpret_cast<const char*>(commandBuffer + 1);
if(strcmp(query, "TStatus") == 0)
sendReply("T0");
else if(strncmp(query, "Supported", strlen("Supported")) == 0)
sendReply("PacketSize=1600");
else if(strncmp(query, "Xfer:features:read:target.xml:",
strlen("Xfer:features:read:target.xml:")) == 0)
sendReply(target_xml);
else if (strncmp(query, "fThreadInfo", strlen("fThreadInfo")) == 0) {
auto list = ctu->tm.thread_list();
char tmp[17] = {0};
string val = "m";
for (auto it = list.begin(); it != list.end(); it++) {
if (!(*it)->started)
continue;
memset(tmp, 0, sizeof(tmp));
sprintf(tmp, "%x", (*it)->id);
val += (char*)tmp;
val += ",";
}
val.pop_back();
sendReply(val.c_str());
} else if (strncmp(query, "sThreadInfo", strlen("sThreadInfo")) == 0)
sendReply("l");
else
sendReply("");
}
void GdbStub::handleSetThread() {
// TODO: allow actually changing threads now :|
if(memcmp(commandBuffer, "Hg", 2) == 0 || memcmp(commandBuffer, "Hc", 2) == 0) {
// Get thread id
if (commandBuffer[2] != '-') {
int threadid = (int)hexToInt(commandBuffer + 2, strlen((char*)commandBuffer + 2));
ctu->tm.setCurrent(threadid);
}
sendReply("OK");
} else
sendReply("E01");
}
auto stringFromFormat(const char* format, ...) {
char *buf = nullptr;
va_list args;
va_start(args, format);
if(vasprintf(&buf, format, args) < 0) {
// On failure buf is left undefined, so don't read from it.
LOG_ERROR(GdbStub, "Unable to allocate memory for string");
va_end(args);
return string();
}
va_end(args);
string ret = buf;
free(buf);
return ret;
}
void GdbStub::sendSignal(uint32_t signal) {
latestSignal = signal;
uint8_t sp[16];
uint8_t pc[16];
intToGdbHex(sp, reg(31));
intToGdbHex(pc, reg(32));
string buffer = stringFromFormat("T%02x%02x:%.16s;%02x:%.16s;", latestSignal, 32, pc, 31, sp);
auto curthread = ctu->tm.current();
if(curthread == nullptr)
curthread = ctu->tm.last();
if (curthread != nullptr)
buffer += stringFromFormat("thread:%x;", curthread->id);
LOG_DEBUG(GdbStub, "Response: %s", buffer.c_str());
sendReply(buffer.c_str());
}
void GdbStub::readCommand() {
commandLength = 0;
memset(commandBuffer, 0, sizeof(commandBuffer));
uint8_t c = readByte();
if(c == '+') {
// ignore ack
return;
} else if(c == 0x03) {
LOG_INFO(GdbStub, "gdb: found break command");
haltLoop = true;
remoteBreak = true;
return;
} else if(c != GDB_STUB_START) {
LOG_DEBUG(GdbStub, "gdb: read invalid byte %02x", c);
return;
}
while((c = readByte()) != GDB_STUB_END) {
if(commandLength >= sizeof(commandBuffer)) {
LOG_ERROR(GdbStub, "gdb: commandBuffer overflow");
sendPacket(GDB_STUB_NACK);
return;
}
commandBuffer[commandLength++] = c;
}
auto checksum_received = hexCharToValue(readByte()) << 4;
checksum_received |= hexCharToValue(readByte());
auto checksum_calculated = calculateChecksum(commandBuffer, commandLength);
if(checksum_received != checksum_calculated) {
LOG_ERROR(GdbStub,
"gdb: invalid checksum: calculated %02x and read %02x for $%s# (length: %d)",
checksum_calculated, checksum_received, commandBuffer, commandLength);
commandLength = 0;
sendPacket(GDB_STUB_NACK);
return;
}
sendPacket(GDB_STUB_ACK);
}
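readCommand and sendReply above implement the framing of the GDB Remote Serial Protocol: a packet is `$<payload>#<checksum>`, where the checksum is the modulo-256 sum of the payload bytes, transmitted as two hex digits. A minimal, standalone sketch of the same framing in Python (independent of this stub):

```python
def rsp_checksum(payload: bytes) -> int:
    # GDB RSP checksum: modulo-256 sum of the payload bytes.
    return sum(payload) % 256

def rsp_frame(payload: bytes) -> bytes:
    # $<payload>#<checksum as two lowercase hex digits>
    return b"$" + payload + b"#" + b"%02x" % rsp_checksum(payload)

def rsp_unframe(packet: bytes) -> bytes:
    # Strip the framing and verify the checksum.
    if not (packet.startswith(b"$") and packet[-3:-2] == b"#"):
        raise ValueError("malformed RSP packet")
    payload = packet[1:-3]
    if int(packet[-2:], 16) != rsp_checksum(payload):
        raise ValueError("bad RSP checksum")
    return payload
```

For example, the reply "OK" is framed as `$OK#9a`, which is why the stub reserves four extra bytes (`$`, `#`, and two checksum digits) around the payload.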
bool GdbStub::isDataAvailable() {
fd_set fd_socket;
FD_ZERO(&fd_socket);
FD_SET(client, &fd_socket);
struct timeval t;
t.tv_sec = 0;
t.tv_usec = 0;
if(select(client + 1, &fd_socket, nullptr, nullptr, &t) < 0) {
LOG_ERROR(GdbStub, "select failed");
return false;
}
return FD_ISSET(client, &fd_socket) != 0;
}
void GdbStub::readRegister() {
uint8_t reply[64];
memset(reply, 0, sizeof(reply));
uint32_t id = hexCharToValue(commandBuffer[1]);
if(commandBuffer[2] != '\0') {
id <<= 4;
id |= hexCharToValue(commandBuffer[2]);
}
if(id <= 32)
intToGdbHex(reply, reg(id));
else if(id == 33)
memset(reply, '0', 8);
else
return sendReply("E01");
sendReply(reinterpret_cast<char*>(reply));
}
void GdbStub::readRegisters() {
uint8_t buffer[GDB_BUFFER_SIZE - 4 + 1];
memset(buffer, 0, sizeof(buffer));
uint8_t* bufptr = buffer;
for(int i = 0; i <= 32; i++) {
intToGdbHex(bufptr + i * 16, reg(i));
}
bufptr += (33 * 16);
memset(bufptr, '0', 8);
bufptr[8] = '\0';
sendReply(reinterpret_cast<char*>(buffer));
}
void GdbStub::writeRegister() {
const uint8_t* buffer_ptr = commandBuffer + 3;
uint32_t id = hexCharToValue(commandBuffer[1]);
if(commandBuffer[2] != '=') {
++buffer_ptr;
id <<= 4;
id |= hexCharToValue(commandBuffer[2]);
}
auto val = gdbHexToInt(buffer_ptr);
if(id <= 32)
reg(id, val);
else if(id == 33) {
}
else
return sendReply("E01");
sendReply("OK");
}
void GdbStub::writeRegisters() {
const uint8_t* buffer_ptr = commandBuffer + 1;
if(commandBuffer[0] != 'G')
return sendReply("E01");
for(auto i = 0; i < 33; ++i)
reg(i, gdbHexToInt(buffer_ptr + i * 16));
sendReply("OK");
}
void GdbStub::readMemory() {
uint8_t reply[GDB_BUFFER_SIZE - 4];
auto start_offset = commandBuffer + 1;
auto addr_pos = find(start_offset, commandBuffer + commandLength, ',');
auto addr = hexToInt(start_offset, static_cast<uint32_t>(addr_pos - start_offset));
start_offset = addr_pos + 1;
auto len = hexToInt(start_offset, static_cast<uint32_t>((commandBuffer + commandLength) - start_offset));
LOG_DEBUG(GdbStub, "gdb: addr: " ADDRFMT " len: " ADDRFMT, addr, len);
if(len * 2 + 1 > sizeof(reply)) { // +1 for the terminating NUL written below
sendReply("E01");
return;
}
auto data = new uint8_t[len];
if(ctu->cpu.readmem(addr, data, len)) {
memToGdbHex(reply, data, len);
reply[len * 2] = '\0';
sendReply(reinterpret_cast<char*>(reply));
} else
sendReply("E00");
delete[] data;
}
void GdbStub::writeMemory() {
auto start_offset = commandBuffer + 1;
auto addr_pos = find(start_offset, commandBuffer + commandLength, ',');
gptr addr = hexToInt(start_offset, static_cast<uint32_t>(addr_pos - start_offset));
start_offset = addr_pos + 1;
auto len_pos = find(start_offset, commandBuffer + commandLength, ':');
auto len = hexToInt(start_offset, static_cast<uint32_t>(len_pos - start_offset));
auto dst = new uint8_t[len];
gdbHexToMem(dst, len_pos + 1, len);
if(ctu->cpu.writemem(addr, dst, len))
sendReply("OK");
else
sendReply("E00");
delete[] dst;
}
void GdbStub::_break(bool is_memoryBreak) {
if(!haltLoop) {
haltLoop = true;
sendSignal(SIGTRAP);
}
memoryBreak = is_memoryBreak;
}
void GdbStub::step() {
stepLoop = true;
haltLoop = true;
}
void GdbStub::_continue() {
memoryBreak = false;
stepLoop = false;
haltLoop = false;
}
bool GdbStub::commitBreakpoint(BreakpointType type, gptr addr, uint32_t len) {
auto& p = getBreakpointList(type);
Breakpoint breakpoint;
breakpoint.active = true;
breakpoint.addr = addr;
breakpoint.len = len;
if(type == BreakpointType::Execute)
breakpoint.hook = ctu->cpu.addCodeBreakpoint(addr);
else
breakpoint.hook = ctu->cpu.addMemoryBreakpoint(addr, len, type);
p.insert({addr, breakpoint});
LOG_DEBUG(GdbStub, "gdb: added %d breakpoint: " ADDRFMT " bytes at " ADDRFMT, type, breakpoint.len,
breakpoint.addr);
return true;
}
void GdbStub::addBreakpoint() {
BreakpointType type;
uint8_t type_id = hexCharToValue(commandBuffer[1]);
switch (type_id) {
case 0:
case 1:
type = BreakpointType::Execute;
break;
case 2:
type = BreakpointType::Write;
break;
case 3:
type = BreakpointType::Read;
break;
case 4:
type = BreakpointType::Access;
break;
default:
return sendReply("E01");
}
auto start_offset = commandBuffer + 3;
auto addr_pos = find(start_offset, commandBuffer + commandLength, ',');
gptr addr = hexToInt(start_offset, static_cast<uint32_t>(addr_pos - start_offset));
start_offset = addr_pos + 1;
auto len = (uint32_t) hexToInt(start_offset, static_cast<uint32_t>((commandBuffer + commandLength) - start_offset));
if(type == BreakpointType::Access) {
type = BreakpointType::Read;
if(!commitBreakpoint(type, addr, len)) {
return sendReply("E02");
}
type = BreakpointType::Write;
}
if(!commitBreakpoint(type, addr, len)) {
return sendReply("E02");
}
sendReply("OK");
}
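addBreakpoint above parses packets of the form `Ztype,addr,kind` (and removeBreakpoint the matching `z` packets), with all three fields in hex. A standalone sketch of that parsing, with an illustrative function name:

```python
def parse_breakpoint_packet(payload: str):
    # e.g. "Z0,400000,4" -> ('Z', 0, 0x400000, 4); all fields are hex.
    kind = payload[0]
    if kind not in ("Z", "z"):
        raise ValueError("not a breakpoint packet")
    type_s, addr_s, len_s = payload[1:].split(",")
    return kind, int(type_s, 16), int(addr_s, 16), int(len_s, 16)
```

Types 0/1 map to execute breakpoints here, 2 to write, 3 to read, and 4 to access watchpoints, which the stub installs as a read plus a write breakpoint.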
void GdbStub::removeBreakpoint() {
BreakpointType type;
uint8_t type_id = hexCharToValue(commandBuffer[1]);
switch (type_id) {
case 0:
case 1:
type = BreakpointType::Execute;
break;
case 2:
type = BreakpointType::Write;
break;
case 3:
type = BreakpointType::Read;
break;
case 4:
type = BreakpointType::Access;
break;
default:
return sendReply("E01");
}
auto start_offset = commandBuffer + 3;
auto addr_pos = find(start_offset, commandBuffer + commandLength, ',');
gptr addr = hexToInt(start_offset, static_cast<uint32_t>(addr_pos - start_offset));
if(type == BreakpointType::Access) {
type = BreakpointType::Read;
removeBreakpoint(type, addr);
type = BreakpointType::Write;
}
removeBreakpoint(type, addr);
sendReply("OK");
}
void GdbStub::isThreadAlive() {
int threadid = (int)hexToInt(commandBuffer + 1, strlen((char*)commandBuffer + 1));
auto threads = ctu->tm.thread_list();
for (auto it = threads.begin(); it != threads.end(); it++) {
if ((*it)->id == threadid) {
sendReply("OK");
return;
}
}
sendReply("E01");
}
void GdbStub::handlePacket() {
if(!isDataAvailable())
return;
readCommand();
if(commandLength == 0)
return;
LOG_DEBUG(GdbStub, "Packet: %s", commandBuffer);
switch(commandBuffer[0]) {
case 'q':
handleQuery();
break;
case 'H':
handleSetThread();
break;
case '?':
sendSignal(latestSignal);
break;
case 'k':
LOG_ERROR(GdbStub, "killed by gdb");
break; // 'k' (kill) expects no reply; don't fall through to readRegisters()
case 'g':
readRegisters();
break;
case 'G':
writeRegisters();
break;
case 'p':
readRegister();
break;
case 'P':
writeRegister();
break;
case 'm':
readMemory();
break;
case 'M':
writeMemory();
break;
case 's':
step();
return;
case 'C':
case 'c':
_continue();
return;
case 'z':
removeBreakpoint();
break;
case 'T':
isThreadAlive();
break;
case 'Z':
addBreakpoint();
break;
default:
sendReply("");
break;
}
}
| {
"pile_set_name": "Github"
} |
/*
* Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
package jdk.jfr.internal;
import java.util.function.Supplier;
/**
* JFR logger
*
*/
public final class Logger {
private final static int MAX_SIZE = 10000;
public static void log(LogTag logTag, LogLevel logLevel, String message) {
if (logTag.shouldLog(logLevel.level)) {
logInternal(logTag, logLevel, message);
}
}
public static void log(LogTag logTag, LogLevel logLevel, Supplier<String> messageSupplier) {
if (logTag.shouldLog(logLevel.level)) {
logInternal(logTag, logLevel, messageSupplier.get());
}
}
private static void logInternal(LogTag logTag, LogLevel logLevel, String message) {
if (message == null || message.length() < MAX_SIZE) {
JVM.log(logTag.id, logLevel.level, message);
} else {
JVM.log(logTag.id, logLevel.level, message.substring(0, MAX_SIZE));
}
}
}
| {
"pile_set_name": "Github"
} |
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.io;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.PrivateCellUtil;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFileInfo;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;
import org.apache.hadoop.hbase.io.hfile.ReaderContext;
import org.apache.hadoop.hbase.regionserver.StoreFileReader;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* A facade for a {@link org.apache.hadoop.hbase.io.hfile.HFile.Reader} that serves up
* either the top or bottom half of an HFile, where 'bottom' is the first half
* of the file containing the keys that sort lowest and 'top' is the second half
* of the file with keys that sort greater than those of the bottom half.
* The top includes the split file's midkey, or the key that follows it if the
* midkey does not exist in the file.
*
* <p>This type works in tandem with the {@link Reference} type. This class
* is used for reading while Reference is used for writing.
*
* <p>This file is not splittable. Calls to {@link #midKey()} return null.
*/
@InterfaceAudience.Private
public class HalfStoreFileReader extends StoreFileReader {
private static final Logger LOG = LoggerFactory.getLogger(HalfStoreFileReader.class);
final boolean top;
// This is the key we split around. Its the first possible entry on a row:
// i.e. empty column and a timestamp of LATEST_TIMESTAMP.
protected final byte [] splitkey;
private final Cell splitCell;
private Optional<Cell> firstKey = Optional.empty();
private boolean firstKeySeeked = false;
/**
* Creates a half file reader for a hfile referred to by an hfilelink.
* @param context Reader context info
* @param fileInfo HFile info
* @param cacheConf CacheConfig
* @param r original reference file (contains top or bottom)
* @param refCount reference count
* @param conf Configuration
*/
public HalfStoreFileReader(final ReaderContext context, final HFileInfo fileInfo,
final CacheConfig cacheConf, final Reference r,
AtomicInteger refCount, final Configuration conf) throws IOException {
super(context, fileInfo, cacheConf, refCount, conf);
// This is not the actual midkey for this half-file; it's just the border
// around which we split top and bottom. We have to look in the files to find
// the actual last and first keys for the bottom and top halves. Half-files
// don't have an actual midkey themselves. Having no midkey is how we indicate
// that the file is not splittable.
this.splitkey = r.getSplitKey();
this.splitCell = new KeyValue.KeyOnlyKeyValue(this.splitkey, 0, this.splitkey.length);
// Is it top or bottom half?
this.top = Reference.isTopFileRegion(r.getFileRegion());
}
protected boolean isTop() {
return this.top;
}
@Override
public HFileScanner getScanner(final boolean cacheBlocks,
final boolean pread, final boolean isCompaction) {
final HFileScanner s = super.getScanner(cacheBlocks, pread, isCompaction);
return new HFileScanner() {
final HFileScanner delegate = s;
public boolean atEnd = false;
@Override
public Cell getKey() {
if (atEnd) return null;
return delegate.getKey();
}
@Override
public String getKeyString() {
if (atEnd) return null;
return delegate.getKeyString();
}
@Override
public ByteBuffer getValue() {
if (atEnd) return null;
return delegate.getValue();
}
@Override
public String getValueString() {
if (atEnd) return null;
return delegate.getValueString();
}
@Override
public Cell getCell() {
if (atEnd) return null;
return delegate.getCell();
}
@Override
public boolean next() throws IOException {
if (atEnd) return false;
boolean b = delegate.next();
if (!b) {
return b;
}
// constrain the bottom.
if (!top) {
if (getComparator().compare(splitCell, getKey()) <= 0) {
atEnd = true;
return false;
}
}
return true;
}
@Override
public boolean seekTo() throws IOException {
if (top) {
int r = this.delegate.seekTo(splitCell);
if (r == HConstants.INDEX_KEY_MAGIC) {
return true;
}
if (r < 0) {
// midkey is < first key in file
return this.delegate.seekTo();
}
if (r > 0) {
return this.delegate.next();
}
return true;
}
boolean b = delegate.seekTo();
if (!b) {
return b;
}
// Check key.
return (this.delegate.getReader().getComparator().compare(splitCell, getKey())) > 0;
}
@Override
public org.apache.hadoop.hbase.io.hfile.HFile.Reader getReader() {
return this.delegate.getReader();
}
@Override
public boolean isSeeked() {
return this.delegate.isSeeked();
}
@Override
public int seekTo(Cell key) throws IOException {
if (top) {
if (PrivateCellUtil.compareKeyIgnoresMvcc(getComparator(), key, splitCell) < 0) {
return -1;
}
} else {
if (PrivateCellUtil.compareKeyIgnoresMvcc(getComparator(), key, splitCell) >= 0) {
// we would place the scanner in the second half.
// it might be an error to return false here ever...
boolean res = delegate.seekBefore(splitCell);
if (!res) {
throw new IOException(
"Seeking for a key in bottom of file, but key exists in top of file, " +
"failed on seekBefore(midkey)");
}
return 1;
}
}
return delegate.seekTo(key);
}
@Override
public int reseekTo(Cell key) throws IOException {
// This function is identical to the corresponding seekTo function
// except
// that we call reseekTo (and not seekTo) on the delegate.
if (top) {
if (PrivateCellUtil.compareKeyIgnoresMvcc(getComparator(), key, splitCell) < 0) {
return -1;
}
} else {
if (PrivateCellUtil.compareKeyIgnoresMvcc(getComparator(), key, splitCell) >= 0) {
// we would place the scanner in the second half.
// it might be an error to return false here ever...
boolean res = delegate.seekBefore(splitCell);
if (!res) {
throw new IOException("Seeking for a key in bottom of file, but"
+ " key exists in top of file, failed on seekBefore(midkey)");
}
return 1;
}
}
if (atEnd) {
// skip the 'reseek' and just return 1.
return 1;
}
return delegate.reseekTo(key);
}
@Override
public boolean seekBefore(Cell key) throws IOException {
if (top) {
Optional<Cell> fk = getFirstKey();
if (fk.isPresent() &&
PrivateCellUtil.compareKeyIgnoresMvcc(getComparator(), key, fk.get()) <= 0) {
return false;
}
} else {
// The equals sign isn't strictly necessary just here to be consistent
// with seekTo
if (PrivateCellUtil.compareKeyIgnoresMvcc(getComparator(), key, splitCell) >= 0) {
boolean ret = this.delegate.seekBefore(splitCell);
if (ret) {
atEnd = false;
}
return ret;
}
}
boolean ret = this.delegate.seekBefore(key);
if (ret) {
atEnd = false;
}
return ret;
}
@Override
public Cell getNextIndexedKey() {
return null;
}
@Override
public void close() {
this.delegate.close();
}
@Override
public void shipped() throws IOException {
this.delegate.shipped();
}
};
}
@Override
public boolean passesKeyRangeFilter(Scan scan) {
return true;
}
@Override
public Optional<Cell> getLastKey() {
if (top) {
return super.getLastKey();
}
// Get a scanner that caches the block and that uses pread.
HFileScanner scanner = getScanner(true, true);
try {
if (scanner.seekBefore(this.splitCell)) {
return Optional.ofNullable(scanner.getKey());
}
} catch (IOException e) {
LOG.warn("Failed seekBefore " + Bytes.toStringBinary(this.splitkey), e);
} finally {
if (scanner != null) {
scanner.close();
}
}
return Optional.empty();
}
@Override
public Optional<Cell> midKey() throws IOException {
// Returns an empty Optional to indicate the file is not splittable.
return Optional.empty();
}
@Override
public Optional<Cell> getFirstKey() {
if (!firstKeySeeked) {
HFileScanner scanner = getScanner(true, true, false);
try {
if (scanner.seekTo()) {
this.firstKey = Optional.ofNullable(scanner.getKey());
}
firstKeySeeked = true;
} catch (IOException e) {
LOG.warn("Failed seekTo first KV in the file", e);
} finally {
if(scanner != null) {
scanner.close();
}
}
}
return this.firstKey;
}
@Override
public long getEntries() {
// Estimate the number of entries as half the original file; this may be wildly inaccurate.
return super.getEntries() / 2;
}
@Override
public long getFilterEntries() {
// Estimate the number of entries as half the original file; this may be wildly inaccurate.
return super.getFilterEntries() / 2;
}
}
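HalfStoreFileReader above makes each half of a split file visible through one rule: the bottom half serves keys that sort strictly below the split key, while the top half serves the split key and everything above it. That selection rule can be sketched in a few lines of Python (the function name is illustrative):

```python
def half_keys(keys, splitkey, top):
    """Keys visible through one half of a split file.

    The bottom half serves keys that sort strictly below the split key;
    the top half serves the split key and everything that sorts above it.
    """
    return [k for k in keys if (k >= splitkey) == top]
```

For example, splitting the sorted keys ["b", "d", "f"] around "d" yields ["b"] for the bottom half and ["d", "f"] for the top half, mirroring how the scanner wrapper clamps `next()` and `seekTo()` at the split cell.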
| {
"pile_set_name": "Github"
} |
#!./perl
#
# Copyright (c) 1995-2000, Raphael Manfredi
#
# You may redistribute only under the same terms as Perl 5, as specified
# in the README file that comes with the distribution.
#
sub BEGIN {
if ($ENV{PERL_CORE}){
chdir('t') if -d 't';
@INC = ('.', '../lib', '../ext/Storable/t');
} else {
unshift @INC, 't';
}
require Config; import Config;
if ($ENV{PERL_CORE} and $Config{'extensions'} !~ /\bStorable\b/) {
print "1..0 # Skip: Storable was not built\n";
exit 0;
}
require 'st-dump.pl';
}
sub ok;
use Storable qw(freeze thaw);
%::immortals
= (u => \undef,
'y' => \(1 == 1),
n => \(1 == 0)
);
my $test = 12;
my $tests = $test + 6 + 2 * 6 * keys %::immortals;
print "1..$tests\n";
package SHORT_NAME;
sub make { bless [], shift }
package SHORT_NAME_WITH_HOOK;
sub make { bless [], shift }
sub STORABLE_freeze {
my $self = shift;
return ("", $self);
}
sub STORABLE_thaw {
my $self = shift;
my $cloning = shift;
my ($x, $obj) = @_;
die "STORABLE_thaw" unless $obj eq $self;
}
package main;
# Still less than 256 bytes, so long classname logic not fully exercised
# Wait until Perl removes the restriction on identifier lengths.
my $name = "LONG_NAME_" . 'xxxxxxxxxxxxx::' x 14 . "final";
eval <<EOC;
package $name;
\@ISA = ("SHORT_NAME");
EOC
die $@ if $@;
ok 1, $@ eq '';
eval <<EOC;
package ${name}_WITH_HOOK;
\@ISA = ("SHORT_NAME_WITH_HOOK");
EOC
ok 2, $@ eq '';
# Construct a pool of objects
my @pool;
for (my $i = 0; $i < 10; $i++) {
push(@pool, SHORT_NAME->make);
push(@pool, SHORT_NAME_WITH_HOOK->make);
push(@pool, $name->make);
push(@pool, "${name}_WITH_HOOK"->make);
}
my $x = freeze \@pool;
ok 3, 1;
my $y = thaw $x;
ok 4, ref $y eq 'ARRAY';
ok 5, @{$y} == @pool;
ok 6, ref $y->[0] eq 'SHORT_NAME';
ok 7, ref $y->[1] eq 'SHORT_NAME_WITH_HOOK';
ok 8, ref $y->[2] eq $name;
ok 9, ref $y->[3] eq "${name}_WITH_HOOK";
my $good = 1;
for (my $i = 0; $i < 10; $i++) {
do { $good = 0; last } unless ref $y->[4*$i] eq 'SHORT_NAME';
do { $good = 0; last } unless ref $y->[4*$i+1] eq 'SHORT_NAME_WITH_HOOK';
do { $good = 0; last } unless ref $y->[4*$i+2] eq $name;
do { $good = 0; last } unless ref $y->[4*$i+3] eq "${name}_WITH_HOOK";
}
ok 10, $good;
{
my $blessed_ref = bless \\[1,2,3], 'Foobar';
my $x = freeze $blessed_ref;
my $y = thaw $x;
ok 11, ref $y eq 'Foobar';
ok 12, $$$y->[0] == 1;
}
package RETURNS_IMMORTALS;
sub make { my $self = shift; bless [@_], $self }
sub STORABLE_freeze {
# Some reference some number of times.
my $self = shift;
my ($what, $times) = @$self;
return ("$what$times", ($::immortals{$what}) x $times);
}
sub STORABLE_thaw {
my $self = shift;
my $cloning = shift;
my ($x, @refs) = @_;
my ($what, $times) = $x =~ /(.)(\d+)/;
die "'$x' didn't match" unless defined $times;
main::ok ++$test, @refs == $times;
my $expect = $::immortals{$what};
die "'$x' did not give a reference" unless ref $expect;
my $fail;
foreach (@refs) {
$fail++ if $_ != $expect;
}
main::ok ++$test, !$fail;
}
package main;
# $Storable::DEBUGME = 1;
my $count;
foreach $count (1..3) {
my $immortal;
foreach $immortal (keys %::immortals) {
print "# $immortal x $count\n";
my $i = RETURNS_IMMORTALS->make ($immortal, $count);
my $f = freeze ($i);
ok ++$test, $f;
my $t = thaw $f;
ok ++$test, 1;
}
}
# Test automatic require of packages to find thaw hook.
package HAS_HOOK;
$loaded_count = 0;
$thawed_count = 0;
sub make {
bless [];
}
sub STORABLE_freeze {
my $self = shift;
return '';
}
package main;
my $f = freeze (HAS_HOOK->make);
ok ++$test, $HAS_HOOK::loaded_count == 0;
ok ++$test, $HAS_HOOK::thawed_count == 0;
my $t = thaw $f;
ok ++$test, $HAS_HOOK::loaded_count == 1;
ok ++$test, $HAS_HOOK::thawed_count == 1;
ok ++$test, $t;
ok ++$test, ref $t eq 'HAS_HOOK';
# Can't do this because the method is still cached by UNIVERSAL::can
# delete $INC{"HAS_HOOK.pm"};
# undef &HAS_HOOK::STORABLE_thaw;
#
# warn HAS_HOOK->can('STORABLE_thaw');
# $t = thaw $f;
# ok ++$test, $HAS_HOOK::loaded_count == 2;
# ok ++$test, $HAS_HOOK::thawed_count == 2;
# ok ++$test, $t;
# ok ++$test, ref $t eq 'HAS_HOOK';
| {
"pile_set_name": "Github"
} |
/*=============================================================================
Copyright (c) 2001-2011 Joel de Guzman
Copyright (c) 2005-2006 Dan Marsden
Copyright (c) 2009-2010 Christopher Schmidt
Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
==============================================================================*/
#ifndef BOOST_FUSION_ADAPTED_STRUCT_DETAIL_BEGIN_IMPL_HPP
#define BOOST_FUSION_ADAPTED_STRUCT_DETAIL_BEGIN_IMPL_HPP
#include <boost/fusion/iterator/basic_iterator.hpp>
namespace boost { namespace fusion { namespace extension
{
template<typename>
struct begin_impl;
template <>
struct begin_impl<struct_tag>
{
template <typename Seq>
struct apply
{
typedef
basic_iterator<
struct_iterator_tag
, random_access_traversal_tag
, Seq
, 0
>
type;
static type
call(Seq& seq)
{
return type(seq,0);
}
};
};
template <>
struct begin_impl<assoc_struct_tag>
{
template <typename Seq>
struct apply
{
typedef
basic_iterator<
struct_iterator_tag
, assoc_struct_category
, Seq
, 0
>
type;
static type
call(Seq& seq)
{
return type(seq,0);
}
};
};
}}}
#endif
| {
"pile_set_name": "Github"
} |
{
"images" : [
{
"idiom" : "universal",
"filename" : "ic_check_box.png",
"scale" : "1x"
},
{
"idiom" : "universal",
"filename" : "ic_check_box_2x.png",
"scale" : "2x"
},
{
"idiom" : "universal",
"filename" : "ic_check_box_3x.png",
"scale" : "3x"
}
],
"info" : {
"version" : 1,
"author" : "xcode"
},
"properties" : {
"template-rendering-intent" : "template"
}
} | {
"pile_set_name": "Github"
} |