id | text | title |
|---|---|---|
doc_4500
|
The app is basically a detailed form-based questionnaire with 5 questions, and each of the 5 view controllers consists of one question and its set of answer choices. In the first 4, the user has multiple-choice questions. In the last question the user can choose multiple values from the set of options.
I have maintained a separate entity for each view controller, holding the options to be displayed for that view controller.
To capture the user's answers, I have created an entity named Answer with string attributes for the answers to the first 4 questions and a relationship with the fifth entity so that I can capture the set of answer choices selected by the user for the 5th question.
I also need to save the user's selections as they move from the 1st question to the 2nd to the 3rd and so on, not in one go after all the questions have been answered.
Also, the user can discard the answers by popping the first question's screen.
What is the best possible way to achieve this?
I was looking at the following options:
* Create an Answer entity record before coming to the first question view controller, along with a managed object context (moc). I then keep a moc property in each of the 5 view controllers and pass the moc created before the first controller, together with the Answer managed object, from the first controller through to the fifth. Save in this moc whenever the user moves from one question to the next.
* Create a DataCollector type of singleton class with an init method that creates the Answer entity record and methods for creating the moc and saving to it. Then from each question I refer to this Answer managed object and also share the same moc.
Please advise.
A: The easiest way I can see to do this is to just transfer all of the answers along the way into the next view controller using the prepare(for segue:) method. You would do this by saying
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    // Cast the destination to the next question's view controller (e.g. QuestionTwo below)
    let secondScene = segue.destination as! QuestionTwo
    // Assign whatever value you are transferring to the property declared in that controller
    secondScene.answerQuestion1 = answerToTransfer // the value you are transferring
}
When you move to the next UIView, in the view controller, after the class declaration you can simply declare the variable you want to store the value in, so for the UIView corresponding to the next question,
class QuestionTwo: UIViewController {
    // Property that receives the answer passed from the previous controller
    var answerQuestion1: String?
    override func viewDidLoad() {
        super.viewDidLoad()
    }
}
Repeat this for all of the view controllers, and by the time you are at the 5th, you will have all of the answers to the first 4 questions as well as the 5th, at which point you can save it in CoreData and then clear the values from the variables.
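For completeness, here is a minimal sketch of what that final save could look like. The entity name "Answer", the attribute names answer1–answer4, and the answerQuestion1–answerQuestion4 properties are assumptions based on the question, not necessarily the asker's exact model:
import CoreData

// Inside the fifth view controller:
func saveAnswers(in context: NSManagedObjectContext) {
    // Create one Answer record and copy in the values collected along the way
    let answer = NSEntityDescription.insertNewObject(forEntityName: "Answer", into: context)
    answer.setValue(answerQuestion1, forKey: "answer1")
    answer.setValue(answerQuestion2, forKey: "answer2")
    answer.setValue(answerQuestion3, forKey: "answer3")
    answer.setValue(answerQuestion4, forKey: "answer4")
    do {
        try context.save()
    } catch {
        print("Failed to save answers: \(error)")
    }
}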
| |
doc_4501
|
I'm using expo snack for developing an app with google authorization and as soon as I try
to use this function I get that error (This one). I've included the firebase initialization code in that image as well.
| |
doc_4502
|
Screenshot below:
Command I ran:
react-native run-ios
A: As @theoretisch and @JoseVf mentioned before me, please provide more information and describe what you've tried so far.
By putting time into asking a question, you will increase the chance of getting a good answer quickly. Also please refer to the How do I ask a good question? section for more info on that.
All that said, in the spirit of helping you out, the following would fix your issue.
Use fontFamily: 'System' instead of fontFamily: 'Roboto'
Explanation
You are trying to find Roboto font on iOS where it's not included by default in the operating system. On Android however, it is.
So by giving fontFamily: 'System' you are telling React Native to pick the default system font family for the platform you're running on. For iOS this is going to be San Francisco and for Android this will be Roboto.
Note that if you want to show the Roboto font family on both platforms (your design might require it), then you need to include that font in your React Native app bundle, and then you wouldn't get this issue.
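As a small illustration of that change (the style name here is just a placeholder):
import { StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  body: {
    // 'System' resolves to San Francisco on iOS and Roboto on Android
    fontFamily: 'System',
    fontSize: 16,
  },
});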
A: I had exactly the same issue. The project was building perfectly on Android and the build was failing on iOS.
To fix this I did these steps:
1) Added the Roboto.ttf file in the <projectRoot>/assets/fonts folder
2) Added file in Xcode project Resources:
3) Added line <string>Roboto.ttf</string> in UIAppFonts key in Info.plist file
<key>UIAppFonts</key>
<array>
....
<string>Roboto.ttf</string>
</array>
4) Checked that file exists in Build Phases Copy Bundle Resources
Ran Product->Clean Build Folder, Restarted JS server, Rebuilt
| |
doc_4503
|
CASE WHEN convert(int,d.ApplyEscalatorAfterHowManyYears)>0 AND
y.num>=convert(int,d.ApplyEscalatorAfterHowManyYears) THEN ((convert(money,d.AnnualAmount)*(d.Escalator*@counter))/100) else 0 end)
There might be some syntax error.
I don't know whether the following query will help in understanding the problem or not.
declare @counter int
set @counter = 1
;WITH myTbl AS (SELECT * FROM(
SELECT src.ForecastAccountID, src.AccountName, src.RepeatNumber,src.AttributeName,src.Value
FROM (SELECT a.ForecastID, a.ForecastAccountID, a.RepeatNumber, a.AccountNumber, c.AccountName, src.AttributeName, fa.Value
FROM CoA c WITH (NOLOCK) INNER JOIN (Forecast_Account a WITH (NOLOCK) INNER JOIN (
(SELECT s.AttributeSetID, s.AttributeSetName, a.AttributeID, a.AttributeName, a.ColumnOrder, a.SignMultiplier
FROM Attribute_Set s INNER JOIN Attribute a ON s.[AttributeSetID] = a.[AttributeSetID]
WHERE (((s.AttributeSetID)=3))) src
INNER JOIN Forecast_Attribute fa WITH (NOLOCK) ON src.[AttributeID] = fa.[AttributeID]) ON a.[ForecastAccountID] = fa.[ForecastAccountID]) ON c.AccountNumber = a.AccountNumber
WHERE (((a.ForecastAccountID)=332))) src
GROUP BY src.ForecastAccountID, src.AccountName, src.RepeatNumber,src.AttributeName,src.Value
) AS t
PIVOT (min(Value) FOR AttributeName IN ([Counterparty],[Memo],[CoverPeriodBegin],[CoverPeriodEnd],[PaymentFrequency],[AdditionalYearsToRepeat],[AnnualAmount],[Escalator],[ApplyEscalatorAfterHowManyYears],[Payment1Date],[Payment1Percent],[Payment2Date],[Payment2Percent],[Payment3Date],[Payment3Percent],[Payment4Date],[Payment4Percent])) AS pvt),
num(num) AS (SELECT 0 UNION ALL SELECT num+1 FROM num WHERE num < 60)
--INSERT INTO Forecast_Data(ForecastAccountID,CashGAAP,TheDate,Amount,LastUpdated,UpdatedBy)
SELECT d.ForecastAccountID, 'GAAP' AS CashGAAP, dateadd(M, (x.num + 12*y.num), convert(datetime,d.CoverPeriodBegin)) AS TheDate,
Round((convert(money,d.AnnualAmount)+ (CASE WHEN convert(int,d.ApplyEscalatorAfterHowManyYears)>0 AND
y.num>=convert(int,d.ApplyEscalatorAfterHowManyYears) THEN ((convert(money,d.AnnualAmount)*(d.Escalator*@counter))/100) else 0 end))/(DATEDIFF(M,d.CoverPeriodBegin,d.CoverPeriodEnd)+1),2) AS Amount,
GETDATE() AS LastUpdated,
'jhogg1' AS UpdatedBy,y.num FROM num x,num y, myTbl AS d
WHERE (x.num BETWEEN 0 AND (datediff(M, convert(datetime,d.CoverPeriodBegin), convert(datetime,d.CoverPeriodEnd)))) AND (y.num BETWEEN 0 AND convert(int,d.AdditionalYearsToRepeat));
I want to increment the value of @counter in the CASE statement along with the calculation.
A: I know this logic, but it won't help in your example:
declare @mytest table (val1 int)
declare @test int = 60
declare @cur int = 0
declare @counter int = 1
while (@cur < @test)
BEGIN
insert into @mytest select @cur
set @cur = @cur + 1
END
select @counter = @counter + case when val1%2=0 then 1 else 0 end
from @mytest
select @counter
For me, you cannot do that so easily. SQL Server manages your data as one set; it is not designed to iterate row by row like that. You have other tools to do that :) it's not such complicated logic.
Perhaps you could try to rewrite your code with DENSE_RANK or ROW_NUMBER.
If the performance is not too bad, I would do it like this:
declare @mytest table (val1 int)
declare @test int = 60
declare @cur int = 0
while (@cur < @test)
BEGIN
insert into @mytest select @cur
set @cur = @cur + 1
END
select case when b.val1%2=0 then b.val1 else 0 end + ISNULL(p.counters,0)
from @mytest b
left outer join (select val1, ROW_NUMBER() over (order by val1) as counters from @mytest where val1%2 =0) p on p.val1 = b.val1
This is of course a simple example, but it should fit to what you want to do with more complex join and where clause :)
| |
doc_4504
|
Design a Payroll class that has fields for an employee's name, ID number, hourly pay rate, and number of hours worked. Write the appropriate accessor and mutator methods and a constructor that accepts the employee's name and ID number as arguments. The class should also have a method that returns the employee's gross pay, which is calculated as the number of hours worked multiplied by the hourly pay rate. Write a program that demonstrates the class by creating a Payroll object, then asking the user to enter the data for an employee. The program should display the amount of gross pay earned.
Here's what I have for the class:
import java.util.Scanner; //Needed for scanner class.
public class Payroll
{
private String EmployeeName;
private int IDnumber;
private double HourlyPayRate;
private double HoursWorked;
private double GrossPay;
/**
Constructor
@param Name The name to store in EmployeeName.
@param ID The ID to store in Employee ID number.
*/
public Payroll(String Name, int ID)
{
EmployeeName = Name;
IDnumber = ID;
}
public String getEmployeeName()
{
return EmployeeName;
}
public int getIDnumber()
{
return IDnumber;
}
public void setHourlyPayRate(double HourlyRate)
{
HourlyPayRate = HourlyRate;
}
public double getHourlyPayRate()
{
return HourlyPayRate;
}
public void setHoursWorked(double hoursWorked)
{
HoursWorked = hoursWorked;
}
public double getHoursWorked()
{
return HoursWorked;
}
public double getGrossPay()
{
return HourlyPayRate * HoursWorked;
}
}
The test program:
import java.util.Scanner; //Needed for Scanner class.
public class PayrollTest
{
public static void main(String[] args)
{
String EmployeeName;
int IDnumber;
double HoursWorked;
double HourlyPayRate;
double GrossPay;
//Create a Scanner object for keyboard input.
Scanner keyboard = new Scanner(System.in);
//Get the employee's name.
System.out.println("Enter an employee's name: ");
EmployeeName = keyboard.nextLine();
//Get the employee's ID.
System.out.println("Enter the employee's ID " );
IDnumber = keyboard.nextInt();
//Get the number of hours worked by the employee.
System.out.println("Enter the amount of hours worked by this employee: ");
HoursWorked = keyboard.nextDouble();
//Get the hourly pay rate of the employee.
System.out.println("Enter the hourly pay rate for this employee: ");
HourlyPayRate = keyboard.nextDouble();
//Create a payroll object, passing EmployeeName and IDnumber
// as arguments to the constructor.
Payroll pay = new Payroll(EmployeeName, IDnumber);
//Get the Gross Pay of the employee.
System.out.println("The gross pay of " + EmployeeName + " is: " + pay.getGrossPay());
}
}
When I compile it, I don't get any errors, but I keep getting a gross pay of 0.0 at the end.
A: You create a new Payroll object
Payroll pay = new Payroll(EmployeeName, IDnumber);
and call the getGrossPay() method on it in your print statement
pay.getGrossPay()
that method does the following:
return HourlyPayRate * HoursWorked;
since you never set HourlyPayRate or HoursWorked, of course the method is going to return 0.
To fix this, set those values to the variables you read from the user:
pay.setHoursWorked(HoursWorked);
pay.setHourlyPayRate(HourlyPayRate);
You can add those lines after you create the Payroll object but before you call the getGrossPay() method. So the last four lines of your code should now look like this:
Payroll pay = new Payroll(EmployeeName, IDnumber);
//set pay rate and hours worked
pay.setHoursWorked(HoursWorked);
pay.setHourlyPayRate(HourlyPayRate);
//Get the Gross Pay of the employee.
System.out.println("The gross pay of " + EmployeeName + " is: " + pay.getGrossPay());
| |
doc_4505
|
"source information is missing from the debug information for this module."
I use callbacks in my C++ Concert CPLEX code. When I run the code without using a callback I don't see the error message, but when I use a callback I do see it. Some of the parameters in my code are 10*5 matrices. When I run the code with a 5*5 matrix I don't get any error, but with a 10*5 matrix I do.
I don't have any information about the DLL and PDB files. I don't know how to fix this error in Visual Studio 2015 on Windows 10. I read some topics about this error on Stack Overflow, but I got mixed up and don't know which one applies to my problem.
A: This message about source information missing is an informational message and not really an error. What it is saying is that you are trying to single step with the debugger through a part of your program for which the debugger does not have the source code. The result is that you can step through the program viewing the assembler generated by the debugger but the debugger can not show you the actual source code of the program at the point you are currently looking.
The fix for this is to get the source code and make it available to the debugger.
However you may not need to do this.
What it sounds like is happening is the following.
You are stepping through your program with the debugger and at some point the CPLEX functionality, which is doing some kind of an asynchronous task in parallel with the part of the program you are stepping through begins to perform some kind of action that will result in your callback being triggered. The current instruction is within the CPLEX functionality and the debugger does not have access to the source code and the descriptive information in the .pdb file generated by the compiler.
This article, Specify symbol (.pdb) and source files in the Visual Studio debugger (C#, C++, Visual Basic, F#), about the Visual Studio debugger has this to say about the informational message you are seeing:
There are several ways for the debugger to break into code that does
not have symbol or source files available:
* Step into code.
* Break into code from a breakpoint or exception.
* Switch to a different thread.
* Change the stack frame by double-clicking a frame in the Call Stack window.
Background on callbacks and asynch processing
Some background information on the callback concept: the Wikipedia article Callback (computer programming), What is a callback function?, and What is a "callback" in C and how are they implemented?.
When you are not using callbacks then there is no asynchronous task being performed, the use of CPLEX is synchronous in that you do a call into a CPLEX function, it returns with a result, and then your program continues after waiting for the result. With asynchronous, you do a call into the CPLEX functionality that starts an asynchronous task and then immediately returns without finishing with the expectation that when the task does finish, your callback will be triggered.
When CPLEX triggers the callback, because you are single stepping and the program does a sudden transfer of control to the CPLEX functionality between one step and the next in the debugger, you are suddenly single stepping through the CPLEX source code but the debugger doesn't have that source. So it issues an informational message telling you that it can't find the source and giving you other options.
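To make the callback idea concrete, here is a rough, generic C++ sketch of the two calling styles. This is deliberately not the CPLEX API, just an illustration of synchronous versus callback-style invocation:
#include <functional>
#include <iostream>

// Synchronous style: the caller blocks until the result is ready.
int solve_sync() { return 42; }

// Callback style: the caller registers a function that the library
// invokes later, possibly from code the debugger has no source for.
void solve_with_callback(const std::function<void(int)>& on_done) {
    // ... long-running solver work would happen here ...
    on_done(42); // control jumps back into user code when this fires
}

int main() {
    std::cout << "sync result: " << solve_sync() << "\n";
    solve_with_callback([](int result) {
        std::cout << "callback result: " << result << "\n";
    });
    return 0;
}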
Workaround debug procedures
What I do under these circumstances is to set a breakpoint in the callback so that if I then just do a Run command, the callback will be triggered and then execution will stop at that breakpoint in my source and then I can continue single stepping through the callback function source.
The problem you may run into is when the callback is on one thread and the other execution path you were following is on another thread. Then what happens is the debugger is swapping between the two threads of execution and single stepping becomes more difficult as the running thread changes from one place in your program to another. To get around this usually requires setting breakpoints or manually changing the currently executing thread with the debugger.
However if this functionality is single threaded then you should be able to just set the breakpoint in the callback and then when the callback is triggered by the CPLEX functionality, execution will jump to that point and hit the breakpoint. You can then single step through the callback functionality and when it returns back to the CPLEX functionality just press Run to let it continue.
| |
doc_4506
|
reviewer | reviewee
===================
2 | 1
3 | 2
4 | 3
5 | 4
In a function call, I know both a reviewer-id and a reviewee-id (the owner of the item the reviewee is looking to retrieve).
I'm now trying to send a query that iterates all the entries in the reviewers table, starting with the reviewer, and ends at the reviewee's id (and matches that to the reviewee id I know). So I'm trying to find out if there is a connection between reviewee and reviewer at all.
Is it possible to do this in a single query?
A: You can do this:
WITH CTE
AS
(
SELECT reviewer, reviewee
FROM TableName
WHERE reviewee = @revieweeID
UNION ALL
SELECT p.reviewer, p.reviewee
FROM CTE c
INNER JOIN TableName p ON c.reviewee = p.reviewer
)
SELECT *
FROM CTE;
--- WHERE reviewer = @reviewerID;
Demo
| |
doc_4507
|
dat <- as.data.frame(cbind(time=c(1980:2019), value=rnorm(40)))
head(dat)
time value
1 1980 -1.7196037
2 1981 -0.8135920
3 1982 -0.7451891
4 1983 0.7011315
5 1984 0.5657109
I transformed the "value" become time series,
ts(dat$value, start=1980, end=2019)
but when I try to plot the confidence intervals, it doesn't work,
ggplot(dat, aes(x=time, y=value)) +
geom_line() + geom_hline(yintercept=0) +
geom_ribbon(aes(ymin=mean(value)-sd(value),
ymax=mean(value)+sd(value)), alpha=0.05)
but it returned...
Can someone help me? Thanks.
A: First of all, I'm not sure why you converted to ts as {ggplot2} can't use it and you didn't assign it back to dat in any case.
Also in your data you have only one y value at each x value. Therefore it doesn't make sense to calculate any error or confidence interval. For example sd(1) returns NA.
But the main issue is that you are calculating your ribbon off of all the data points so it's just one big rectangle.
If you have multiple y values at each x value, then you can use stat_summary() to calculate the mean and error as desired. See example below:
library(tidyverse)
d <- data.frame(x = rep(1:5, 5), y = runif(25))
d %>%
ggplot(aes(x, y)) +
geom_point() +
geom_line(stat = "summary", size = 2) +
stat_summary(fun = mean,
fun.min = function(x) mean(x) - sd(x),
fun.max = function(x) mean(x) + sd(x),
geom = "ribbon",
alpha = 0.5)
Created on 2022-04-07 by the reprex package (v2.0.1)
| |
doc_4508
|
At the moment I have a solution, but it is inconvenient: you need to forward all methods through one component. Example:
public interface ITextView
{
    void SetText(string text);
}
public class ButtonView : MonoBehaviour, ITextView
{
    [SerializeField] private Text _text;
    public void SetText(string text)
    {
        _text.text = text;
    }
}
public class SomeWindow : MonoBehaviour, IButtonView
{
    [SerializeField] private TextView _textView;
    public void SetText(string text) => _textView.SetText(text);
}
In this case, with the growth of inherited interfaces, the forwarding of methods grows.
As a possible solution to the problem, it is also possible to simply create an interface that stores references to all dependencies. Example:
public interface ISomeWindowFacade
{
ITextView TextView { get; }
//Some dependence
//Another one
}
But in this case, I will pass unnecessary dependencies to most classes
Is it possible to store links to the required dependencies in SomeWindow and bind the rest after its creation?
public class Installer : ScriptableObjectInstaller
{
    [SerializeField] private SomeWindow _window;
    public override void InstallBindings()
    {
        Container.BindInterfacesTo<ISomeWindowFacade>().FromComponentInNewPrefab(_window).AsSingle();
    }
}
A: Perhaps I did not state the problem correctly, but here's the gist. There is an SOInstaller that stores a reference to the prefab; at bind time I created an instance of this component and wanted to receive the component itself and its children as separate components. I found solutions to this problem:
* The implementation of the component must store a list of MonoBehaviours; you can make an abstract class that stores a reference to the list
public interface IWindowView {}
public class ConcreteWindow : MonoBehaviour, IWindowView
{
    [SerializeField] private List<MonoBehaviour> _children;
    public List<MonoBehaviour> Children => _children;
}
* In the installer (in my case an SOInstaller) we add a reference to the prefab with the component, then instantiate the component and bind it
public class SomeInstaller : ScriptableObjectInstaller
{
[SerializeField] private ConcreteWindow _window;
public override void InstallBindings()
{
//If you use an abstract class, you can convert to it
var window = Container.InstantiatePrefabForComponent<ConcreteWindow>(_window);
Container.BindInterfacesTo(window.GetType()).AsSingle();
}
}
* Now we can go through all the child elements, inject them, and bind them
public class SomeInstaller : ScriptableObjectInstaller
{
[SerializeField] private ConcreteWindow _window;
public override void InstallBindings()
{
//Previous
foreach(var component in window.Children)
{
Container.Inject(component);
Container.BindInterfacesTo(component.GetType()).AsSingle();
}
}
}
As a result, we get the following: we have created an object that stores the components, and we can receive them separately in the desired class.
public class SomeClass
{
//ISomeView - child implementer
public SomeClass(IWindowView windowView, ISomeView someView)
{
}
}
| |
doc_4509
|
{ firstname: { $not: { $in: ["Alice", "Bob"] } } }
But now I have to match against first name + last name (i.e. the given list is ["Alice Smith", "Bob Jones"]).
I know I can concatenate the two fields easily like this:
{ $concat: ["$firstname", " ", "$lastname"] }
But how do I use this new "field" in the initial query like I used firstname there? Obviously, I can't just replace the object key with this expression.
This answer is pretty close, but unfortunately it's missing the last piece of information on how exactly one uses that solution in the $in context. And since I think this is a general usage question but couldn't find anything about it (at least with the search terms I used), I'm opening this separate question.
Edit: If possible, I want to avoid using an aggregation. The query I'm looking for should be used as the filter parameter of the Node driver's deleteMany method.
A: Indeed you are really close.
You have to use an aggregate. It's a sequence of "stages" where in each stage you can transform the data and pass the result to the next stage.
Here is a solution; Try it Here
With a $project I create a new field full_name by using your $concat.
Then with a $match, I use your condition { firstname: { $not: { $in: ["Alice", "Bob"] } } } but apply it to the newly created full_name instead.
You can remove the $match in the mongoplayground and see what it does.
PS: there is a Mongo operator $nin that combines $not and $in.
db.collection.aggregate([
{
"$project": {
"full_name": {
$concat: [
"$firstname",
" ",
"$lastname"
]
}
}
},
{
$match: {
full_name: {
$nin: [
"Alice In wonderland",
"Bob Marley"
]
}
}
}
])
A: You can use $expr, and for the negation put $not outside of $in:
db.collection.aggregate([
{
$match: {
$expr: {
$not: {
$in: [
{ $concat: ["$firstname", " ", "$lastname"] },
["Alice In wonderland", "Bob Marley"]
]
}
}
}
}
])
Playground
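Since the question mentions using the filter with the Node driver's deleteMany, here is a sketch of how the same $expr condition could be passed there. The collection name is a placeholder and db is assumed to be an already-connected Db instance:
const result = await db.collection('people').deleteMany({
  $expr: {
    $not: {
      $in: [
        { $concat: ["$firstname", " ", "$lastname"] },
        ["Alice In wonderland", "Bob Marley"]
      ]
    }
  }
});
console.log(`deleted ${result.deletedCount} documents`);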
| |
doc_4510
|
What's the best way to integrate this into my Eclipse CDT build?
* Ideally I can reference the external tool using a relative path
* Ideally Eclipse will know that it needs to re-run the external tool if I change this file
I've tried out adding something to the 'Builders' section under Project Properties with mixed results.
thx, Alex
A: I got this working well by adding a 'Builder' of type 'Program'.
Right click on the project, Click Properties, Click New ..., Add the location of the file you want to execute, as well as any command line arguments.
A: I'm using CMake with Eclipse CDT.
It provides a generator to generate the whole Eclipse CDT project.
You just have to set up your CMakeLists.txt file and then run the following command in your project directory:
cmake -G"Eclipse CDT4 - Unix Makefiles" -D CMAKE_BUILD_TYPE=Debug .
After that Eclipse uses the CMake-generated makefile and regenerates it when you change the CMakeLists.txt.
There are tutorials for CMake and CDT.
You have to check whether CMake suits your needs.
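As a rough idea of what such a CMakeLists.txt could look like, including a custom step for an external tool; the tool name my_generator and the file names are placeholders, not anything from the question:
cmake_minimum_required(VERSION 3.10)
project(MyProject CXX)

# Re-run the external tool whenever its input file changes.
add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp
    COMMAND my_generator ${CMAKE_CURRENT_SOURCE_DIR}/input.def ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/input.def
    COMMENT "Running external code generator"
)

# Listing the generated file as a source wires the custom command into the build.
add_executable(myapp src/main.cpp ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp)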
A: Relatively early in our projects' lifetime, I felt like I was running into too many limitations of Eclipse CDT's managed builds, so I switched to Make.
This isn't as bad as it might sound; Eclipse CDT integrates well with make (running it with the configuration you chose and parsing its results), and you can use Eclipse CDT's generated makefiles as a starting point.
Once you're using make, you can easily add a custom build step.
| |
doc_4511
|
regex: /<body.*[^>]>/i
test string: <bla bla ><body class='a b ... what ever..d'><fo bar>
Target: <body class='a b ... what ever..d'>
example: http://jsfiddle.net/bogdanm/qbccq79z/
Problem:
instead of matching <body class='a b ... d'> it selects <body class='a b ... what ever..d'><fo bar>
A: The * is a greedy quantifier and works well with negated classes. The problem you are having is that using the quantifier with . makes the regex engine match ("read") everything up to the end and then look backwards for a character other than > followed by >. The regex engine finds this combination at the final r>. So, you get the wrong match.
You need to apply the * quantifier to the negated character class [^>] and remove the .* that matches everything up to the end:
var testStr= "<bla bla ><body class='a b c d'><fo bar>";
var reg = /<body[^>]*>/i;
var match = reg.exec(testStr);
if(match != null){
// we know regex matched
alert(match[0].toString() + '\n');
}
Here is the updated demo
| |
doc_4512
|
Config:
@Bean
public FreeMarkerConfigurer freeMarkerConfigurer() {
FreeMarkerConfigurer config = new FreeMarkerConfigurer();
config.setTemplateLoaderPath("/WEB-INF/templates/ftl/");
Properties props = new Properties();
props.put("template_update_delay", getFreemarkerUpdateDelay());
props.put("template_exception_handler", getFreemarkerExceptionHandler());
props.put("url_escaping_charset", WebConstants.CHAR_SET_UTF_8);
config.setFreemarkerSettings(props);
config.setDefaultEncoding(WebConstants.CHAR_SET_UTF_8);
return config;
}
We use Freemarker 2.3.28 and Spring 5.0.7.RELEASE
TIA for any ideas on how to turn off escaping of the output from the macros.
A: Looking at https://github.com/spring-projects/spring-framework/blob/master/spring-webmvc/src/main/resources/org/springframework/web/servlet/view/freemarker/spring.ftl, it starts with <#ftl output_format="HTML" ...>, so that particular template has activated HTML auto-escaping for itself, which is stronger than the outputFormat Configuration setting. Since MessageSource.getMessage can only return String (and thus not a TemplateMarkupOutputModel, which is not auto-escaped by FreeMarker), it seems to me that the maintainers has made a serious oversight here. They have broken backward compatibility very much (assuming it indeed didn't escape in Spring 4), but worse, I don't see how they wanted to support not escaping. There's no such variation of spring.message or anything. (And it yet again strikes back that spring.message isn't a function, because then you could apply ?noesc on it.) So you should report it to them.
Update: Earlier I have recommended disabling auto-escaping on spring.ftl via template_configurations and auto_escaping_policy, but it turns out that's wrong, since some macros did escaping in Spring 4 via ?html, which they have removed when switching to auto-escaping. So then those won't escape, which is wrong again...
| |
doc_4513
|
My function:
add_action($VARS_ARRAY);
This function should add $VARS_ARRAY to the A class.
How can I do it in PHP?
A: You don't need a function for this; you can just add the value directly.
$a = new A();
$a->vars_array = $VARS_ARRAY;
| |
doc_4514
|
It's working well up until now...
This is how far I've come:
var formId = ["Name", "E-mail", "Subject", "Message"];
$.each(formId, function(i) {
$("input, textarea").focus(function() {
if ($(this).val() === formId[i])
$(this).val("");
}).focusout(function() {
if ($(this).val() === "")
$(this).val(formId[i]);
});
});
It's working perfectly up until the focus-out part. When I focus out of an input field, it repopulates the value with "Name" in every input field. I have tried different solutions, but this is as close as I've gotten. Anyone got any idea?
A: This doesn't seem to be the best way to tackle this problem. I saw a solution like the following on a blog before. You add the placeholder attribute to the inputs, so they will work on modern browsers, and you add jQuery to inputs with placeholder attributes, like [placeholder]:
$('[placeholder]').focus(function() {
var input = $(this);
if (input.val() == input.attr('placeholder')) {
input.val('');
input.removeClass('placeholder');
}
}).blur(function() {
var input = $(this);
if (input.val() == '' || input.val() == input.attr('placeholder')) {
input.addClass('placeholder');
input.val(input.attr('placeholder'));
}
}).blur();
$('[placeholder]').parents('form').submit(function() {
$(this).find('[placeholder]').each(function() {
var input = $(this);
if (input.val() == input.attr('placeholder')) {
input.val('');
}
})
});
I can't find the blog right now, which is a shame, because this code, I believe, is completely copied from that site and works great, cross-browser.
How this works, and why this is a better approach:
Instead of putting handlers on every input and textarea element, you can easily filter exactly which ones should have handlers on them: those that have a placeholder attribute.
A .submit() handler will keep the default text from being posted as values.
Maybe I should speak on what I feel are problems with your code:
You're using an each function to call multiple focus handlers, which are quite similar. You probably don't need the each function. You could just use $('input, textarea').focus() because that will add a handler to each input. Then, you could check if $.inArray($(this).val(),formId). Docs for inArray().
A: Forget about var formId; instead add a data-placeholder attribute to your input/textarea HTML tags with the appropriate values, then:
$("input, textarea").on({
focus : function() {
var $t=$(this);
if ($t.val() ===$t.data("placeholder"))
$t.val("");
},
blur : function() {
var $t=$(this);
if ($t.val() === "")
$t.val($t.data("placeholder"));
}
});
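For clarity, the markup this handler expects would look something like the following (the field names are just examples):
<input type="text" data-placeholder="Name">
<textarea data-placeholder="Message"></textarea>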
| |
doc_4515
|
Example 122244445577777
1 222 444 4 55 777 77
Answer 3
A: As the language/tool is not mentioned, I'll add the answer for some languages. However, the same RegEx can be used in any language with little or no modification.
([0-9])\1\1
Here, ([0-9]) will match a digit and put it in first captured group, which can later be accessed by using back-reference \1.
The same RegEx can also be written as
(\d)\1{2}
If you want to use this with any character and not just numbers, you can use following RegEx.
(.)\1\1
RegEx Live Demo on RegEx101
Python:
>>> s = '122244445577777'
>>> import re
>>> re.findall(r'(.)\1{2}', s)
['2', '4', '7']
>>> re.findall(r'((.)\2{2})', s)
[('222', '2'), ('444', '4'), ('777', '7')]
>>> len(re.findall(r'(.)\1{2}', s))
3
>>> len(re.findall(r'(\d)\1{2}', s))
3
JavaScript:
var input = "122244445577777";
var regex = /([0-9])\1{2}/g;
var tripletCount = (input.match(regex) || []).length;
document.write(tripletCount);
PHP:
echo preg_match_all('/([0-9])\1{2}/', "122244445577777", $matches);
A: In all the languages I know of, you cannot do this with just a regex; you need to use some of the language's functions.
However, the actual answer should be very short and uncomplicated. If you give us the language you're using, we can provide an actual answer.
In most languages it is probably simpler to do this without regex, as you need to get the count of any repeated letter/number, not a specific one.
A: After some frustration I arrived at this...
/1{3}|2{3}|3{3}|4{3}|5{3}|6{3}|7{3}|8{3}|9{3}|0{3}/
There must be a more elegant way? It's PHP btw, which doesn't allow the g modifier.
| |
doc_4516
|
(It compiles under Visual Studio 2012 Pro without any errors.) But when I execute the script via the command line I get the error:
error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\WindowsPhone\v4.5\Microsoft.WindowsPhone.v4.5.Overrides.targets" was not found
In vs project:
<Import Project="$(MSBuildExtensionsPath)\Microsoft\$(TargetFrameworkIdentifier)\$(TargetFrameworkVersion)\Microsoft.$(TargetFrameworkIdentifier).$(TargetFrameworkVersion).Overrides.targets" />
Is there any workaround here?
A: You can delete this import or change $(TargetFrameworkVersion) to "v8.0". Most probably your project file was created with an older VS2012.
A: You can update your Visual Studio 2012 and Windows 8 to the latest versions, and it will work without workarounds.
A: You can also specify the Visual Studio version to be 2012 in msbuild by using /p:VisualStudioVersion=11.0. See here.
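For example, the command line could look roughly like this (the solution file name is a placeholder):
msbuild MyApp.sln /p:VisualStudioVersion=11.0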
| |
doc_4517
|
I'm posting this for myself and anyone who might find it useful.
You'll need to get a dropbox access token, that can be obtained after creating a dropbox app.
A: function send2dropbox(file) {
var dropboxTOKEN = 'XXXxxx';
var path = '/somePath/' + file.getName();
var dropboxurl = 'https://api.dropboxapi.com/2/files/save_url';
var fileurl = 'https://drive.google.com/uc?export=download&id=' + file.getId();
var headers = {
'Authorization': 'Bearer ' + dropboxTOKEN,
'Content-Type': 'application/json'
};
var payload = {
"path": path,
"url": fileurl
}
var options = {
method: 'POST',
headers: headers,
payload: JSON.stringify(payload)
};
var response = UrlFetchApp.fetch(dropboxurl, options);
return response;
}
You can find an example HERE
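As a usage sketch from the script editor (the file ID is a placeholder):
function test() {
  // Look up any Drive file and push it to Dropbox with the function above
  var file = DriveApp.getFileById('YOUR_DRIVE_FILE_ID');
  var response = send2dropbox(file);
  Logger.log(response.getContentText());
}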
| |
doc_4518
|
data.php
<?php
function dbConn() {
// some database connection here using PDO...
}
function getPosts() {
return db()->query('SELECT u.username,p.* FROM users u JOIN posts p on u.id = p.user_id')->fetchAll(PDO::FETCH_ASSOC);
}
index.php
<?php
$results = getPosts();
print("<pre>".print_r($results,true)."</pre>");
?>
<div x-data="posting()" x-init="fetchPost(<?php json_encode($results); ?>)">
<template x-for="post in posts" :key="post.id">
<h2 x-text="post.title"></h2>
<p x-text="post.content"></p>
</template>
</div>
app.js
document.addEventListener("alpine:init", () => {
Alpine.data("posting", () => ({
fetchPost: (data) => {
this.posts = data;
console.log(this.posts);
},
posts: [],
}));
});
Here is the result on my console and above is my array structure:
thank you in advance!
A: You almost got it right. You forgot to use echo.
<div x-data="posting()" x-init="fetchPost(<?php echo htmlspecialchars(json_encode($results), ENT_QUOTES, 'UTF-8', true) ?>)">
| |
doc_4519
|
This behavior defeats the purpose of seamlessly switching branches without having to worry about unintended changes creeping in.
The steps are highlighted below.
somasundaram.s@user /d/projects/repositories/newrepo (master)
$ ls -ltr
total 1
-rw-r--r-- 1 somasundaram.s 1049089 13 Apr 4 16:28 README
-rw-r--r-- 1 somasundaram.s 1049089 0 Apr 4 16:31 hi
somasundaram.s@user /d/projects/repositories/newrepo (master)
$ git branch new-branch
somasundaram.s@user /d/projects/repositories/newrepo (master)
$ git checkout new-branch
Switched to branch 'new-branch'
somasundaram.s@user /d/projects/repositories/newrepo (new-branch)
$ touch newfile
somasundaram.s@user /d/projects/repositories/newrepo (new-branch)
$ ls -ltr
total 1
-rw-r--r-- 1 somasundaram.s 1049089 13 Apr 4 16:28 README
-rw-r--r-- 1 somasundaram.s 1049089 0 Apr 4 16:31 hi
-rw-r--r-- 1 somasundaram.s 1049089 0 Apr 4 16:37 newfile
somasundaram.s@user /d/projects/repositories/newrepo (new-branch)
$ git checkout master
Switched to branch 'master'
somasundaram.s@user /d/projects/repositories/newrepo (master)
$ ls -ltr
total 1
-rw-r--r-- 1 somasundaram.s 1049089 13 Apr 4 16:28 README
-rw-r--r-- 1 somasundaram.s 1049089 0 Apr 4 16:31 hi
-rw-r--r-- 1 somasundaram.s 1049089 0 Apr 4 16:37 newfile
A:
git checkout carries unstaged files to the new branch
It's not a bug.
This is how Git behaves.
In the following diagram you can see the 3 states.
Git has three main states that your files can reside in:
* committed
* modified
* staged
They are all shared between your branches, but when you check out a branch you only change HEAD, so you end up with the staging area and working directory shared across your repository even when you switch branches.
How to check out a different branch with a clean working directory and staging area?
If you wish to check out a clean branch without any "leftovers" in your working directory and staging area, you can create a new worktree, which results in a shared view of your repository (all the content is shared) but with a different working directory and staging area.
From git v2.5
git worktree add <new_path>
Now do whatever you want in any of your branches. This creates two separate working folders, separated from each other while pointing to the same repository.
Using worktree you don't have to do any clean or reset in order to remove all your staged and untracked content.
Here is a demo of how to do it:
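A minimal sketch of the commands (the path and branch names are placeholders):
$ git worktree add ../newrepo-newbranch new-branch
$ cd ../newrepo-newbranch
$ git worktree list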
A: That's a feature, not a bug, and it has always been that way, as far as I know. If git thinks that it can safely carry along your local modifications, it does so.
If you want to get rid of them, git reset --hard.
A: Git won't change files that are not currently tracked in the repository.
In your example, you only created an untracked file (newfile).
So Git's behavior is absolutely normal.
If you git add newfile without committing the changes, Git will still carry the staged file along when you switch to the master branch.
For example, this will be handled by Git:
$ git branch new-branch
$ git checkout new-branch
Switched to branch 'new-branch'
$ echo "test" > newfile
$ git add newfile
$ git checkout master
A newfile
Switched to branch 'master'
For in depth explanations:
https://stackoverflow.com/a/8526610/882697
| |
doc_4520
|
<?php
if($_SERVER['REQUEST_METHOD']=='POST'){
if(isset($_FILES['photo'])){
//Getting actual file name
$name = $_FILES['photo']['name'];
//Getting temporary file name stored in php tmp folder
$tmp_name = $_FILES['photo']['tmp_name'];
//Path to store files on server
$path = 'images/testimonial/';
//checking file available or not
if(!empty($name)){
//Moving file to temporary location to upload path
move_uploaded_file($tmp_name,$path.$name);
//Displaying success message
echo "Upload successfully";
}else{
//If file not selected displaying a message to choose a file
echo "Please choose a file";
}
}
}
?>
My AJAX code:
$('#uploadImage').submit(function(e){
//Preventing the default behavior of the form
//Because of this line the form will do nothing i.e will not refresh or redirect the page
e.preventDefault();
//Creating an ajax method
$.ajax({
//Getting the url of the uploadphp from action attr of form
//this means currently selected element which is our form
url: $(this).attr('action'),
//For file upload we use post requestd
type: "POST",
//Creating data from form
data: new FormData(this),
//Setting these to false because we are sending a multipart request
contentType: false,
cache: false,
processData: false,
success: function(data){
//If the request is successfull we will get the scripts output in data variable
//Showing the result in our html element
$('#msg').html(data);
},
error: function(){}
});
});
My HTML code:
<form id='uploadImage' action='ajaxupload.php' method='post' enctype='multipart/form-data'>
<input id="im2" type="file" name='photo' class="dropify-fr" data-default-file="" data-max-file-size="200K" />
<button>Upload</button>
</form>
It works fine locally and adds the image to the specified folder, but on the server I get "Upload successfully" without the image appearing in the folder. Does anyone have an idea what's wrong?
| |
doc_4521
|
WHEN col1_alias='' THEN 'empty value'
ELSE 'has value'
END AS result,
(/* a complicated mysql SELECT statement */) AS col_alias
FROM my_table;
The above MySQL query gives me Unknown column 'col_alias' in 'field list' error.
Is it possible to generate the result column based on the value of col?
I don't want to write the complicated MySQL SELECT statement for the second time.
============================== EDIT ================================
Sorry, I forgot to mention that my real situation is more complicated than the query pasted above.
My real query contains JOIN and GROUP BY. Like this:
SELECT
my_table.id AS id,
CASE
WHEN col1_alias='' THEN 'empty value'
ELSE 'has value'
END AS result,
(/* a complicated mysql SELECT statement */) AS col_alias,
another_table.name AS name
FROM my_table
LEFT JOIN another_table
ON
`my_table`.`id` = `another_table`.`id`
GROUP BY
`another_table`.`name`;
Is it possible to avoid Unknown column 'col_alias' in 'field list' in this situation?
I think I might have to write part of the query results to a temporary table. Then write a second query that runs against the original and the temporary table.
However, I still wish that I can use only one query to accomplish the goal.
| |
doc_4522
|
Could you please give an implementation example applying the steps in the documentation above?
I couldn't figure out what the steps are and how to apply them.
Thanks in advance,
A: I'm going at this blind since I can't use my company's Azure environment to test, but at the least this will be a good start for you to be able to troubleshoot.
I set all the requested information as variables so that you can change them as you see fit. The part that is most questionable here is how Azure wants you to do the authorization header.
On the page you linked there is an option to "Try this"; in that menu you should be able to build a custom API request and it will include the headers there.
Let me know how this does and I can help troubleshoot if there are issues.
$runcommandname = ""
$subscriptionId = ""
$resourcegroupname = ""
$vmName = ""
$apiKey = ""
$resource = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourcegroupname/providers/Microsoft.Compute/virtualMachines/$vmName/runCommands/$runcommandname"
$apiversion = "?api-version=2021-07-01"
$resource = $resource + $apiversion
$authHeader = @{
'Authorization' = "apiToken $apiKey"
}
Invoke-RestMethod -Method Put -Uri $resource -Headers $authHeader
A: You can also use the Invoke-AzRestMethod cmdlet to execute the PUT operation using the existing context. Consult the reference documentation for more details: https://learn.microsoft.com/powershell/module/az.accounts/invoke-azrestmethod
Alternatively, if the goal is to run a command on a VM, you can also consider using the Invoke-AzVMRunCommand cmdlet as described here: https://learn.microsoft.com/powershell/module/az.compute/invoke-azvmruncommand.
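For reference, a minimal sketch of that last cmdlet; the resource group, VM name, and script path are placeholders:
Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVM' `
    -CommandId 'RunPowerShellScript' -ScriptPath '.\myScript.ps1'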
| |
doc_4523
|
{
"name1":
{
"fields":
{
"Name":
{
"type": "STRING"
},
"Email":
{
"type": "STRING"
},
"Password":
{
"type": "STRING"
},
"role":
{
"type": "STRING"
}
}
},
"name2":
{
"fields":
{
"url":
{
"type": "STRING"
}
}
},
"name1":
{
"fields":
{
"Address":
{
"type": "STRING"
}
}
}
}
I want to iterate through it and if there is a name which already exists I want to merge the fields; here is my code:
var DBSchema = [];
async.each(Object.keys(config), function(key) {
var currentObject = config[key];
var DBSchemaTemp = [];
for (var field in currentObject.fields) {
if (currentObject.fields.hasOwnProperty(field)) {
DBSchemaTemp[field] = {AttributeName: field,
AttributeType: currentObject.fields[field].type
}
}
}
var arrayInSchema = DBSchema[key];
if (typeof arrayInSchema === 'undefined') {
DBSchema[key] = [];
DBSchema[key].push(DBSchemaTemp);
} else {
DBSchema[key].concat(DBSchemaTemp);
}
}, function(err) {
console.log(err);
});
for (variable in DBSchema) {
console.log(variable);
console.log(DBSchema[variable]);
}
My desired output is:
{
name1: [ Name: { AttributeName: 'Name', AttributeType: 'STRING' },
Email: { AttributeName: 'Email', AttributeType: 'STRING' },
Password: { AttributeName: 'Password', AttributeType: 'STRING' },
role: { AttributeName: 'role', AttributeType: 'STRING' } Address: { AttributeName: 'Address', AttributeType: 'STRING' } ]
name2: [ url: { AttributeName: 'url', AttributeType: 'STRING' } ]
}
but my code returns:
{ name1: [ [ Address: { AttributeName: 'Address', AttributeType: 'STRING' } ] ] name2: [ [ url: { AttributeName: 'url', AttributeType: 'STRING' } ] ] }
Please note; for some reason it is adding double [[]] as well!
| |
doc_4524
|
get_props -type assert
{"a", "b", "c", "d"}
Now all these 4 objects have certain attributes associated with them. But I am interested in the "enabled" attribute only.
get_attribute [get_props a] enabled
true
get_attribute [get_props b] enabled
false
get_attribute [get_props c] enabled
true
get_attribute [get_props d] enabled
false
Now I want to convert only "enabled" objects (enabled = true) out of these 4 "assert" type objects into "cover" type objects (So only "a" & "c" should be converted) and for converting "assert" into "cover", the command is fvcover.
I have tried the following command:
fvcover [get_props -type assert]
Now the problem is that, this fvcover command converts all 4 "assert" type objects into "cover" type objects, instead of just "a" & "c".
So I guess, I need to combine both get_props & get_attributes command, but I don't know how to do it.
So how to solve this problem?
Note :- "a", "b", "c", "d" are just for explanation. In reality, get_props command may return any number of results with any name. But out of that list, I need to convert, only those objects, whose "enabled" attribute is true.
A: The lists are not in Tcl format. Here's some test code you can use to convert from your format to Tcl.
#### PROCS FOR TESTING ####
proc get_props {type {assert no}} {
if {$type == "-type" && $assert == "assert"} {
return {"a", "b", "c", "d"}
}
if {$type == "a" || $type == "c"} {
return [list enabled true]
} elseif {$type == "b" || $type == "d"} {
return [list enabled false]
}
return [list NOT FOUND]
}
proc get_attribute {a k} {
foreach {key value} $a {
if {$key == $k} {
return $value
}
}
return NOT_FOUND
}
# get props. props is in a list format that is not native tcl list
set props [get_props -type assert]
# convert props to tcl list
set props_list [string map {, ""} $props]
# make a list to catch enabled props
set enabled_props [list]
# add enabled props to new list
foreach {prop_name} $props_list {
if {[get_attribute [get_props $prop_name] enabled] == "true"} {
lappend enabled_props "\"$prop_name\""
}
}
# convert enabled_props to your format
set enabled_props "{[join $enabled_props ", "]}"
# run your program on $enabled_props
puts $enabled_props
| |
doc_4525
|
// From a list of arrays
var listOfArr = new List<Int32[]>();
listOfArr.Add(new Int32[] { 1, 2, 3 });
listOfArr.Add(new Int32[] { 1 });
listOfArr.Add(new Int32[] { 1, 2, 3, 4, 5, 6 });
var rangeWithArrays = ws.Cell(2, 3).InsertData(listOfArr);
Source
As such the three arrays of integers will get added as rows so I can directly add my list of arrays in one shot.
EDIT:
Seems like no one is understanding the question. I'll provide an example.
Let's say I get an input of List<string> { "a", "b", "c", "d" }. Now how can I convert it into a List<string[]> where string[0] is { "a", "b" } and string[1] is { "c", "d" }, and so on? In other words, create string arrays of size 2.
Once again, this is because ClosedXML allows auto-populating from a List, and that is why I mention it.
A: Isn't it a List<string> to string[]?
string[] ArrayOfStrings = MyList.ToArray()
If you want parts with sizes, you can do it:
int size = 5;
List<string[]> ArrList = new List<string[]>();
for (var i = 0; i < myList.Count; i+=size)
{
ArrList.Add(myList.Skip(i).Take(size).ToArray());
}
I think that should do it.
A: Daniel is right. But I understood that you wanted a List of string arrays. In that case you can try something like this. I wrote that in Notepad so check for errors.
List<string[]> ConvertToListOfArrays(List<string> list, int arraySize)
{
List<string[]> listOfArrays = new List<string[]>();
foreach(string item in list)
{
string[] newArray = new string[arraySize];
newArray[0] = item;
listOfArrays.Add(newArray);
}
return listOfArrays;
}
A: string[] sampleList = { "a", "b", "c", "d" };
int splitFactor = 2;
List<string[]> splitedList = new List<string[]>();
for (int i = 0; i < sampleList.Length; i += splitFactor)
{
//Skip(i) means ignore (i) elements from start of sampleList
// and start from (i+1)th element.
//And Take(splitFactor) means give me an array of string to the size of (splitFactor)
//And finally .ToArray() convert the IEnumerable<string> to string[]
//Then we simply add it to splittedList
splitedList.Add(sampleList.Skip(i).Take(splitFactor).ToArray());
}
So what happens here is: when i = 0, the LINQ query starts from the first element and takes 2 elements (the 1st and 2nd); then in the increment part of the for loop i becomes 2, so the LINQ query skips the first 2 elements and takes the next 2 (the 3rd and 4th), and it continues until i reaches sampleList.Length (4).
I hope I explained it enough; if you still need more explanation, tell me.
| |
doc_4526
|
That is, this works fine:
match (b:Book) where b.guid={guid} return b;
But, how to pass multiple guids as parameters for this query:
match (b:Book) where b.guid in [guid1,guid2,gid3] return b;
I am using neo4jphp client, my code is like this:
$client = new Everyman\Neo4j\Client( "neo4j server address", "7474" );
$result = new Everyman\Neo4j\Cypher\Query( $client, "match (b:Book) where b.guid={guid} return b", array('guid'=>$guid1) );
$res = $result->getResultSet();
A: You should pass an array as the parameter; the query would look like this:
match (b:Book) where b.guid in {myMap} return b;
$client = new Everyman\Neo4j\Client( "neo4j server address", "7474" );
$result = new Everyman\Neo4j\Cypher\Query( $client, "match (b:Book) where b.guid in {myMap} return b", array('myMap'=> array($guid1, $guid2, $guid3)) );
$res = $result->getResultSet();
| |
doc_4527
|
public function fault($fault = null, $code = 404);
Why do we need to define this kind of function without any body or code?
A: If I look at your example, you are looking at an interface file. Interfaces need to be implemented by the classes that choose to implement them. The file you are looking at is the Zend\Server\Server class, which is implemented for example by Zend\XmlRpc\Server.
If you look at that class, you'll see that fault() has been implemented in there.
More information about interfaces can be found here: php.net documentation
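As a generic sketch of the idea (the names here are made up and are not the Zend code):
<?php
interface ServerInterface
{
    // The interface only declares the signature; there is no body here.
    public function fault($fault = null, $code = 404);
}

class MyServer implements ServerInterface
{
    // The implementing class supplies the actual code.
    public function fault($fault = null, $code = 404)
    {
        return "Fault {$code}: {$fault}";
    }
}

$server = new MyServer();
echo $server->fault('Not found');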
A: Zend_Json_Server
fault($fault = null, $code = 404, $data = null)
Create and return a Zend_Json_Server_Error object.
Zend_Json_Server and how to call it via JSON
| |
doc_4528
|
A: Your question actually has less to do with windows and JFrames and much more to do with the general issue of communication between objects. If you want to change the state of an object, you can call one of its methods. The same can be done for your "JFrames", by having the active code, whatever or wherever that is, call a method of one of your other "non-active" display component objects, thereby having them alter their displays. Often the issue becomes one of when to call a method, and with event-driven GUIs, this often means use of an observer pattern of one sort or another.
If my answer seems a bit vague and general, I'm afraid that this is the best I can do given the information so far presented. If you need more specific help, then consider posting relevant code, and more information about your overall problem and program structure.
Also, read about The Use of Multiple JFrames, Good/Bad Practice? as your overall GUI design with its multiple JFrames, while a common newbie program design, may be annoying to the users. It may be better to display multiple views in other ways.
A: It is possible, basically one JFrame has to have a reference to the other. You said one of your JFrame is the main one, and the other a info one. The idea is to have a member variable which will be the reference to the other JFrame (or JPanel in your case). You can then use that reference to update it (repainting it, perhaps). Of course another solution would be to have a separate class which has references to all the JFrame/JPanel, and can therefore access them all. Note that you can also set the focus to a JFrame using requestFocusInWindow().
Here's a sample code to show you the principle.
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
public class Main
{
public static void main(String[] args)
{
JFrame childFrame = new JFrame("Child Frame");
JPanel childPanel = new JPanel();
childFrame.setSize(300, 200);
childFrame.add(childPanel);
MainFrame mainFrame = new MainFrame(childPanel);
mainFrame.setVisible(true);
childFrame.setVisible(true);
}
}
class MainFrame extends JFrame
{
private JPanel childPanel;
public MainFrame(JPanel childPanel)
{
super();
this.childPanel = childPanel;
setSize(300,200);
setTitle("Main Frame");
actOnChild();
}
public void actOnChild()
{
// Do something...
childPanel.add(new JLabel("I was updated with something!"));
}
}
The output:
You'll see the child (info) JFrame's panel will contain the JLabel, which we added through a method of the first JFrame, effectively updating the second JFrame from the first one. This is possible from any other class which has that reference.
| |
doc_4529
|
The "bits" variable don't change the value in the if statements and for me it's looks like it should work.
static string IPMask(string CIDR)
{
int intCIDR;
Int32.TryParse(CIDR, out intCIDR);
int bits = 32 - intCIDR;
string strmask = "11111111.11111111.11111111.11111111";
int dot;
if (bits < 8)
{
dot = 0;
}
else if (bits > 8)
{
dot = 1;
}
else if (bits > 16)
{
dot = 2;
}
else if (bits > 24)
{
dot = 3;
}
Console.WriteLine($"{dot}");
string Mask = strmask.Remove(strmask.LastIndexOf("") - bits + dot);
return Mask;
}
A: Let's say I put in 24. The code will come first to the check where bits > 8. 24 is greater than 8, and so that code will run. The later conditions are else if checks, meaning they won't even be attempted, because the prior condition already matched.
Second, let's say I put in 8 exactly. That doesn't match any of the conditions, meaning the dots variable would never be assigned.
You want something like this:
static string IPMask(string CIDR)
{
int intCIDR;
Int32.TryParse(CIDR, out intCIDR);
int bits = 32 - intCIDR;
string strmask = "11111111.11111111.11111111.11111111";
int dot = 0;
if (bits > 24)
{
dot = 3;
}
else if (bits > 16)
{
dot = 2;
}
else if (bits > 8)
{
dot = 1;
}
Console.WriteLine(dot);
return strmask.Remove(strmask.Length - bits + dot);
}
... though that last line still needs some work.
| |
doc_4530
|
{
"success": true,
"result": {
"name": "rocky",
"age": 10,
},
"error": null,
"unAuthorizedRequest": false
}
I want to delete ( "success": true,). I also want to change "result" to some other name.
How do I do this in ASP.NET MVC and JavaScript?
A: This JSON object should be the result of a JSON serialization that your application applies to one of your models (classes) when it returns a response.
Look for the model that is being serialized and modify it as you wish.
UPDATE:
In order to omit a specific property from the serialization, use the JsonIgnore attribute, for example:
[JsonIgnore]
public bool Success{ get; set; }
A: You can write a model class for the response, and if you want to give another name to some of the properties, you can do it like this:
[DataMember(Name="othername")]
public string NameToChange { get; set; }
| |
doc_4531
|
import pandas as pd
df = pd.DataFrame(data=[[1, 'ABC'], [2, 'ABC'], [3, 'ABCDEF'], [1, 'ABCDEF']], columns=['id', 'marker'])
I'm effectively trying to do this SQL statement but in Pandas.
delete #table
from #table a, #table b
where a.id = b.id
and a.marker = b.marker + 'DEF'
Which would effectively get rid of the last row in the dataframe. Any idea how I can do this?
EDIT:
To clarify on the above, lets say the data is like this:
id marker
0 1 ABC
1 2 ABC
2 3 ABCDEF
3 1 ABCDEF
4 4 ABCDEF
The answer should be
id marker
0 1 ABC
1 2 ABC
2 3 ABCDEF
4 4 ABCDEF
(1, 'ABCDEF') is omitted because (1, 'ABC') is present, which is what the SQL statement would effectively do (i.e., delete all rows where IDs are equal and marker has 'DEF' attached). If for ID X there is only 'ABCDEF' present it would keep it, but if ID Y has both 'ABC' and 'ABCDEF' it will delete the 'ABCDEF'
A: This is one solution to get what you need. I changed the dataframe a little bit and added [2, 'ABCDEF'] to demonstrate that this code will keep ABC regardless of whether ABCDEF or ABC appears first.
df = pd.DataFrame(data=[[1, 'ABCDEF'], [2, 'ABC'], [2, 'ABCDEF'], [3, 'ABCDEF'], [1, 'ABC']], columns=['id', 'marker'])
df
id marker
0 1 ABCDEF
1 2 ABC
2 2 ABCDEF
3 3 ABCDEF
4 1 ABC
lst = df.values.tolist()
list_tuples = [tuple(l) for l in lst]

# Group every marker by its id
newdata = {}
for key, value in list_tuples:
    newdata.setdefault(key, []).append(value)

# Sort each id's markers so the shortest ('ABC') sorts first, then keep only that one
newdata = {k: sorted(v) if len(v) > 1 else v for k, v in newdata.items()}
create_dataframe = {k: v[0] for k, v in newdata.items()}

df2 = pd.DataFrame(list(create_dataframe.items()), columns=['id', 'marker'])
df2.index = range(len(df2))
df2
id marker
0 1 ABC
1 2 ABC
2 3 ABCDEF
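For comparison, here is a hedged, more direct pandas sketch that mirrors the SQL self-join: a row is dropped when another row with the same id has a marker equal to this row's marker without the 'DEF' suffix.
import pandas as pd

df = pd.DataFrame(
    data=[[1, 'ABC'], [2, 'ABC'], [3, 'ABCDEF'], [1, 'ABCDEF'], [4, 'ABCDEF']],
    columns=['id', 'marker'],
)

# Every (id, marker + 'DEF') pair "shadows" a longer row with the same id.
shadowed = set(zip(df['id'], df['marker'] + 'DEF'))

# Keep only rows whose (id, marker) pair is not shadowed by a shorter marker.
keep = [(i, m) not in shadowed for i, m in zip(df['id'], df['marker'])]
result = df[keep]
print(result)
This keeps the original index, so (1, 'ABC'), (2, 'ABC'), (3, 'ABCDEF') and (4, 'ABCDEF') survive while (1, 'ABCDEF') is dropped, matching the expected output in the question.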
| |
doc_4532
|
My issue right now is that you can spam-click everything and everything will show.
Whenever you pick a wrong pair, the cards get the class "red" and turn red via the CSS. But you're still able to click the other cards, and a few turn back to normal again after 600 ms while some just stay and don't turn back.
Is it possible to use an if/else condition for CSS? If there are "red" cards, the pointer events for the normal cards should be none. You can play the game here and test the issue yourself: https://memory-20.815374.repl.co
Fixing it without if/else is fine too; using if/else is just the first thing that came to my head.
here is the code for CSS:
.card.clicked {
background-color: orange;
pointer-events: none;
}
.card.checked {
background-color: lightgreen;
visibility: hidden;
transition: visibility 0s linear 300ms, opacity 300ms;
}
.card.clicked img,
.card.checked img {
opacity: 1;
}
.card.red {
background-color: #f15f5f;
}
.card {
height: 120px;
width: 100px;
background-color: #ff5cbb;
border-radius: 10px;
display: grid;
place-items: center;
cursor: pointer;
transition: 0.3s all ease;
}
Here is the JavaScript code:
} else {
const incorrectCards = document.querySelectorAll(".card.clicked");
incorrectCards[0].classList.add("red");
incorrectCards[1].classList.add("red");
setTimeout(() => {
incorrectCards[0].classList.remove("red");
incorrectCards[0].classList.remove("clicked");
incorrectCards[1].classList.remove("red");
incorrectCards[1].classList.remove("clicked");
}, 600);
| |
doc_4533
|
I have tried to install Visual Studio on the Windows 8 machine to compile it there, but I only have the Express version, which doesn't come with the MFC libraries, so it will not compile. The PC I wrote the app on was a Windows 7 PC.
I am not too sure what other information might be useful.
EDIT:
The error message
A: I have managed to compile MFC applications in older versions of Visual Studio Express, see:
http://www.codeproject.com/Articles/30439/How-to-compile-MFC-code-in-Visual-C-Express
To get this to work in a recent VS Express will probably require some tinkering.
| |
doc_4534
|
A: It's an abbreviation for "ordinal".
Ordinal numbers are counting numbers
| |
doc_4535
|
var WebSocketServer = require('ws').Server
, http = require('http')
, express = require('express')
, app = express()
, port = process.env.PORT || 5000;
var server = http.createServer(app);
server.listen(port);
var wss = new WebSocketServer({server: server});
console.log('websocket server created');
wss.on('connection', function(ws) {
var id = setInterval(function() {
ws.send(JSON.stringify(new Date()), function() { });
}, 1000);
console.log('websocket connection open');
ws.on('close', function() {
console.log('websocket connection close');
clearInterval(id);
});
});
A: You connected to the HTTP server, but did not establish a WebSocket connection. That's why your script doesn't print anything.
I'm not sure you can test a WebSocket by hand; take a look at how the handshake works.
But there are a few telnet-like programs that work with WebSocket. Maybe wscat from the ws module you're using will help with that.
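If a quick manual check is all you need, one hedged option is to open the browser dev-tools console on any page and connect directly; this assumes the server above is listening on port 5000.
const ws = new WebSocket('ws://localhost:5000');
ws.onopen = () => console.log('websocket open');
ws.onmessage = (event) => console.log('received:', event.data);   // should print the date once a second
ws.onclose = () => console.log('websocket closed');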
A: I found https://www.websocket.org/echo.html to be useful. Just create a websocket.index file on your hard drive if you are testing a localhost server. The code is at the bottom of the page.
A: UPD: as per @jouell's note, it looks like this tool is now out of date
This little tool did the job for me: https://github.com/lafikl/telsocket
brew tap pascaliske/telsocket
brew update
brew install telsocket
telsocket -url ws://127.0.0.1:50000
(for people who don't use npm)
| |
doc_4536
|
My question is: is there a way to accurately and reliably detect when such connection reset errors occur in PHP and, more importantly, to disregard that image fetch when it happens and move on with the script without the image, as if nothing had happened?
Also I'm using curl_multi_exec and curl_multi_getcontent since I'm parallelizing the process
I'm also calling the function imagecreatefromstring() and I found that the error probably has to do with imagecreatefromstring() since uncommenting that line ends all connection reset errors...
A: You can get more information on cURL errors by using curl_errno. See the first comment on that page for details.
A: curl_exec() returns a boolean FALSE if an error occurs, so
$result = curl_exec($handle);
if ($result === FALSE) {
... an error occurred ...
}
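Since the question mentions curl_multi_exec and imagecreatefromstring, here is a hedged sketch of the same per-handle check in that setup; $handles is an assumed array of the curl handles added to the multi handle, and depending on your PHP version you may prefer curl_multi_info_read to get the per-handle result.
foreach ($handles as $key => $ch) {
    if (curl_errno($ch) !== 0) {
        // e.g. a connection reset - log it and skip this image fetch entirely
        error_log("image fetch $key failed: " . curl_error($ch));
        continue;
    }

    $data = curl_multi_getcontent($ch);
    $img  = @imagecreatefromstring($data);   // returns false on bad or partial data

    if ($img === false) {
        continue;                            // move on as if nothing happened
    }

    // ... use $img here ...
    imagedestroy($img);
}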
| |
doc_4537
|
[Microsoft][ODBC SQL Server Driver][SQL Server]1
(Microsoft OLE DB Provider for ODBC Drivers)
any time this trigger is fired:
ALTER TRIGGER [dbo].[trgDR]
ON [dbo].[Deceased Register]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
declare @parishid int, @monthid int, @yearid int,@total float,@gtotal float
select @parishid =i.Parish_id from inserted i;
select @monthid=i.Month_id from inserted i;
select @yearid=i.year_id from inserted i;
select @total=COUNT(*) from [Deceased Register]
where Parish_id=@parishid and Year_id=@yearid and
Month_id=@monthid
print @total
select @gtotal=COUNT(*) from [Deceased Register]
where Parish_id=@parishid and Year_id=@yearid
print @total
if not exists(select parish_id from ParishStatistics
where parish_id=@parishid and Year_id=@yearid and month_id=@monthid)
begin
insert into ParishStatistics (Parish_id,Year_id,month_id,Deceased)
values (@parishid,@yearid,@monthid,@total)
end
else
begin
update ParishStatistics
set Deceased=@total
where Parish_id=@parishid and Year_id=@yearid and month_id=@monthid
end
update ParishStatistics
set AnnualDeceased=@gtotal
where Parish_id=@parishid and Year_id=@yearid
END
| |
doc_4538
|
I've tried a few different approaches such as For/Next loops and Do While/Loop, but there doesn't seem to be an effect either way. The closest I have gotten in terms of research is that this may be slide refresh-related, but I haven't been able to figure out if that's true or not. Right now my delay is at .25 seconds, but if I extend it to 1 second, it works consistently. Unfortunately that isn't useful in this scenario because I'm attempting to create an element of randomization where the player doesn't know exactly where their color will stop; in other words, it needs to be fast.
Here's the main code and then the 2 references:
Sub YellowTeamPuzzleFlash()
Dim RandomNumber As Integer
Do While ActivePresentation.Slides(1).Shapes("YellowTeamLight").Visible = True
RandomNumber = Int((56 * Rnd) + 1)
ActivePresentation.Slides(1).Shapes("PuzzleFlash" & RandomNumber).Line.ForeColor.RGB = RGB(255, 255, 0)
delayTime3
playPing
ActivePresentation.Slides(1).Shapes("PuzzleFlash" & RandomNumber).Line.ForeColor.RGB = RGB(0, 0, 0)
Loop
End Sub
Sub delayTime3()
Dim PauseTime, Start
PauseTime = 0.25 ' Set duration in seconds
Start = Timer ' Set start time.
Do While Timer < Start + PauseTime
DoEvents ' Yield to other processes.
Loop
End Sub
Function playPing()
Call PlaySound("C:\KTResources\Ping.wav", 0&, &H1 Or &H20000)
End Function
True PPT animations are still very smooth, so I know this isn't hardware related; it seems entirely related to my usage of the code.
| |
doc_4539
|
I have made a version without networking.
I have been able to send a small file up to 8 KB on a different version
My program is function based, so it branches off from the main menu to other menus and functions. Since there is a bit of jumping, it would be best to show all the code.
https://github.com/BaconBombz/Dencryptor/blob/Version-2.0/Dencryptor.py
The socket connects, and all required data is sent. Then, the file is AES encrypted and sent through the socket. The receiving end writes the encrypted data to a file and decrypts it. The program will say the file was sent, but on the receiving end the program spits out a struct error because the file that should have the encrypted data is empty.
A: The code is too non-minimal so here's a minimal example of downloading an unencrypted file. Also, TCP is a streaming protocol and using sleeps to separate your data is incorrect. Define a protocol for the byte stream instead. This is the protocol of my example:
*
*Open the connection.
*Send the UTF-8-encoded filename followed by a newline.
*Send the encoded file size in decimal followed by a newline.
*Send the file bytes.
*Close the connection.
Note this is Python 3 code since Python 2 is obsolete and support has ended.
server.py
from socket import *
import os
CHUNKSIZE = 1_000_000
# Make a directory for the received files.
os.makedirs('Downloads',exist_ok=True)
sock = socket()
sock.bind(('',5000))
sock.listen(1)
with sock:
while True:
client,addr = sock.accept()
# Use a socket.makefile() object to treat the socket as a file.
# Then, readline() can be used to read the newline-terminated metadata.
with client, client.makefile('rb') as clientfile:
filename = clientfile.readline().strip().decode()
length = int(clientfile.readline())
print(f'Downloading {filename}:{length}...')
path = os.path.join('Downloads',filename)
# Read the data in chunks so it can handle large files.
with open(path,'wb') as f:
while length:
chunk = min(length,CHUNKSIZE)
data = clientfile.read(chunk)
if not data: break # socket closed
f.write(data)
length -= len(data)
if length != 0:
print('Invalid download.')
else:
print('Done.')
client.py
from socket import *
import os
CHUNKSIZE = 1_000_000
filename = input('File to upload: ')
sock = socket()
sock.connect(('localhost',5000))
with sock,open(filename,'rb') as f:
sock.sendall(filename.encode() + b'\n')
sock.sendall(f'{os.path.getsize(filename)}'.encode() + b'\n')
# Send the file in chunks so large files can be handled.
while True:
data = f.read(CHUNKSIZE)
if not data: break
sock.sendall(data)
| |
doc_4540
|
I am developing a simple MVC application and I am using ajax to send data from a view to a controller. For some reason, the controller only recognizes the first parameter and the rest are just nulls. I even tried to put fixed strings instead of variables, but they still appear as null in the controller.
The view:
$.ajax({
type: "POST",
url: "../Home/AddItem",
data: "{ItemModel: 'ttt1', ItemName: 'ttt2'}",
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function (data) {
console.log(JSON.stringify(data));
if (data.Success == "Success") {
alert("Item has been added.");
} else {
alert("We were not able to create the offer");
}
},
error: function (exception) {
console.log(exception);
}
});
On the Home controller, I have the below action:
[HttpPost]
public JsonResult AddItem(string ItemModel, string ItemName)//ItemName is always null??
{
try
{
_DB.Database.ExecuteSqlCommand(@"INSERT INTO ITEMS(iModel, iName) VALUES ({0}, {1})", ItemModel, ItemName);
return Json(new { Success = "Success" });
}
catch (Exception ex)
{
throw ex;
}
}
A: You are not sending the data correctly.
The code indicates JSON but is sending just a single string. If you inspect ItemModel, I am certain it will contain the string data sent from the client.
Create a JavaScript object and then stringify that as the body of the request.
var payload = { ItemModel: 'ttt1', ItemName: 'ttt2' }; //<-- create object
$.ajax({
type: "POST",
url: "../Home/AddItem",
data: JSON.stringify(payload), //<-- properly format for request
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function (data) {
console.log(JSON.stringify(data));
if (data.Success == "Success") {
alert("Item has been added.");
} else {
alert("We were not able to create the offer");
}
},
error: function (exception) {
console.log(exception);
}
});
The model binder should then be able to differentiate the desired parameters.
Ideally when expecting data in the body of a request it is better to use a model
public class Item {
public string ItemModel { get; set; }
public string ItemName { get; set; }
}
And have the action explicitly look for it in the body of the request using the FromBody attribute
[HttpPost]
public JsonResult AddItem([FromBody]Item item) {
if(ModelState.IsValid) {
try {
var sql = @"INSERT INTO ITEMS(iModel, iName) VALUES ({0}, {1})";
_DB.Database.ExecuteSqlCommand(sql, item.ItemModel, item.ItemName);
return Json(new { Success = "Success" });
} catch (Exception ex) {
throw ex;
}
}
return Json(new { Success = "BadRequest" });
}
| |
doc_4541
|
const link= document.createElement("a");
link.href = window.location.href;
emailDetails.body = "Here is an href: \r\n" + link;
Meteor.call("sendEmail", emailDetails.to, email, emailDetails.subject, emailDetails.body);
Where my Email method is
sendEmail: function (to, from, subject, html) {
check([to, from, subject, text], [String]);
this.unblock();
Email.send({
to: to,
from: from,
subject: subject,
html: html
});
But I'm having no luck. Source of the actual email message does show the anchor tag, but no href inside it.
I've also tried putting the html in a template and then compiling the template using
SRR.compileTemplate
and passing that result as my email body. But that doesn't work either. Any ideas on how to achieve this?
A: Turns out Gmail wouldn't render the anchor tag, but Outlook does. So it's an issue with how the email client treats embedded html.
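One hedged workaround along those lines is to send both a plain-text and an HTML body, since Meteor's Email.send accepts both text and html options; url here is assumed to be the string you want to link to (e.g. window.location.href).
Email.send({
  to: emailDetails.to,
  from: email,
  subject: emailDetails.subject,
  text: "Here is the link: " + url,
  html: "Here is the link: <a href=\"" + url + "\">" + url + "</a>"
});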
A: Actually it's possible; just use plain HTML tags instead of document.createElement("a").
smtp = {
username: "server@gentlenode.com",
password: "3eeP1gtizk5eziohfervU",
server: "smtp.gmail.com",
port: 587
};
process.env.MAIL_URL =
"smtp://" +
encodeURIComponent(smtp.username) +
":" +
encodeURIComponent(smtp.password) +
"@" +
encodeURIComponent(smtp.server) +
":" +
smtp.port;
Email.send({
to: "duckduck@quack.com",
from: "mew2@gmail.com",
subject: "hello",
html: `<p><strong>This will render as bold text</strong>, but this will not.</p> Also, You can direct users to <a href="duckduckgo.com">duckduckgo</a>`
});
https://themeteorchef.com/tutorials/using-the-email-package
| |
doc_4542
|
I've set the max_connections to 250 (up from 151). But I'm confused on how I need to allocate RAM. The machine has 32GB of RAM and if I'm reading the results from mysqltuner correctly... I'm only allowing up to 1GB total to be used. But at 250 connections * 2.8M/thread it should only ever reach 700M + the global of 328M?
It looks like we've peaked at 755M. But with all this extra memory left over should I open things up a bit to let MariaDB breathe?
Am I reading this correctly?
This machine doubles as an apache & db server.
Even at full tilt I rarely see the machine use more than 3 or 4GB of total system RAM
I ran mysqltuner and here are the performance results:
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 34d 1h 32m 42s (74M q [25.213 qps], 26M conn, TX: 46G, RX: 42G)
[--] Reads / Writes: 98% / 2%
[--] Binary logging is disabled
[--] Physical Memory : 31.5G
[--] Max MySQL memory : 1.0G
[--] Other process memory: 647.4M
[--] Total buffers: 328.0M global + 2.8M per thread (250 max threads)
[--] P_S Max memory usage: 0B
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 755.5M (2.34% of installed RAM)
[OK] Maximum possible memory usage: 1.0G (3.20% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (0/74M)
[OK] Highest usage of available connections: 60% (152/250)
[OK] Aborted connections: 0.00% (2/26423553)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[!!] Query cache may be disabled by default due to mutex contention.
[OK] Query cache efficiency: 45.7% (21M cached / 47M selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 1% (541 temp sorts / 32K sorts)
[!!] Joins performed without indexes: 24537
[OK] Temporary tables created on disk: 1% (644 on disk / 51K total)
[OK] Thread cache hit rate: 98% (383K created / 26M connections)
[!!] Table cache hit rate: 0% (120 open / 36K opened)
[OK] Open file limit used: 0% (24/4K)
[OK] Table locks acquired immediately: 99% (5M immediate / 5M locks)
A: You have to understand that MySQLtuner's calculation of "Max MySQL Memory" is total bullshit.
It's based on the theoretical maximum — which is possible only if you max out max_connections, and then every connection runs a query at exactly the same moment, and every one of those queries uses the maximum possible sort buffers, read buffers, and join buffers. Realistically, this will never happen in a real server.
When I run MySQLTuner on most production database servers I have supported, the "Max MySQL Memory" is reported as hundreds of times larger than the actual physical RAM on the server. This is not a problem because it's a theoretical maximum that will never actually happen.
If you run SHOW PROCESSLIST on your database server, even if all 150 or 250 threads are connected, you'll see only 6-8 threads running a query, while most other threads are in a state of "Sleeping".
It's like if you are logged into a server with ssh and your terminal is sitting ready at a shell prompt. Are you running any command? No. Your shell is idle. But you're still connected.
The same is true of MySQL. Your application may be connected to the database, but your app isn't running a query yet. And even when it does run a query, it probably won't use the maximum resources allowed. And then it'll finish quickly and return the connection to an idle state again.
At any given instant, the number of threads connected may be high, even while those connections actually running a query is small. You can compare:
mysql> show global status like 'Threads%';
+-------------------+-------+
| Variable_name | Value |
+-------------------+-------+
| Threads_connected | 510 |
| Threads_running | 13 |
+-------------------+-------+
The above are typical numbers from a busy production MySQL instance. In my experience, the ratio of threads connected to threads running ranges between 10:1 and 100:1.
But MySQLTuner calculations are not realistic — they assume the ratio of threads connected to threads running is 1:1.
So given that, the memory you allocate to MySQL has nothing to do with the max_connections you allow.
You may find that increasing some tuning options is helpful for performance, but it has more to do with the size of your data and the types of queries you run against that data.
I recommend reading https://www.percona.com/blog/2016/10/12/mysql-5-7-performance-tuning-immediately-after-installation/
Or if you want to get into deeper study, read: High Performance MySQL, 3rd Edition
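To make that concrete, here is a hedged sketch of the kind of my.cnf settings those resources focus on; the values are purely illustrative, depend on your data size and workload, and have nothing to do with max_connections.
[mysqld]
# Size the buffer pool to the hot data set, leaving headroom for Apache/PHP.
innodb_buffer_pool_size = 8G
# Larger redo logs smooth out write bursts.
innodb_log_file_size    = 512M
# Skip double-buffering through the OS page cache.
innodb_flush_method     = O_DIRECT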
| |
doc_4543
|
Therefore whenever a big enough page loads there's a lot of processing involved since every element of a class "w" has to be analyzed by the client to handle that tooltip. I wonder if there's a way to modify the code somehow to only do this for a specific element of a class "w" that is being clicked instead of looping through all of them at the beginning.
This is a very simple example from their page on which the slowdown is observed:
$(document).ready(function() {
$('.w').tooltipster({
trigger: 'click',
functionBefore: function(instance, helper) {
instance.content('My new content');
}
});
});
Is it really necessary to invoke a loop there (via a Tooltipster or something else) to make every element of class "w" ready for a certain function/task?
A: You can use delegation to create tooltips only when it's useful. It has its drawbacks in a few cases but it works well most of the time. You will find explanations on delegating for Tooltipster in the documentation: http://iamceege.github.io/tooltipster/#delegation
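To give a rough idea, here is a hedged delegation sketch; it assumes the Tooltipster v4 API (the question's functionBefore(instance, helper) signature suggests v4) and relies on the 'tooltipstered' marker class the plugin adds to elements it has initialized.
$(document).on('click', '.w', function () {
  var $el = $(this);
  if (!$el.hasClass('tooltipstered')) {      // not initialized yet
    $el.tooltipster({
      trigger: 'click',
      functionBefore: function (instance, helper) {
        instance.content('My new content');
      }
    });
    $el.tooltipster('open');                 // open it for this first click
  }
});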
| |
doc_4544
|
What I have in mind is to load my data to a Redshift Cluster and run the updates on data persisted in Redshift and have a Lambda function to generate a new file using the updated data and replace the file in the S3 bucket. Is there a better way to accomplish this task?
| |
doc_4545
|
*
*add a file glibc/nptl/my_add.c, which return a sum of two numbers.
*modify the glibc/nptl/Makefile, Add the my_add to "libpthread-routines"
*make glibc and I get the pthread.so, but the function's bind value is LOCAL. How do I change the bind value from LOCAL to GLOBAL?
$ readelf build/nptl/pthread.so.0 -s | grep "my_add"
326: 0000000000000000 0 FILE LOCAL DEFAULT ABS my_add.c
458: 0000000000013550 4 FUNC LOCAL DEFAULT 14 __my_add
510: 0000000000013550 4 FUNC LOCAL DEFAULT 14 my_add@@GLIBC_2.2.5
I think that maybe I ought to modify the gcc flags in the Makefile, but I don't know how to do it. glibc is complicated for me, but I have to solve it.
Please help me, thanks!
my_add.c
#include "pthreadP.h"
#include <shlib-compat.h>
int __my_add (int a, int b)
{
return a+b;
}
versioned_symbol (libpthread, __my_add, my_add, GLIBC_2_1);
A:
but the func's bind value is LOCAL.
This is because the GLIBC build process tightly controls the set of exported symbols (so no undesirable or internal-only symbols are exported).
You must add the my_add function to the nptl/Versions file.
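A hedged sketch of what that entry might look like: add my_add; under the appropriate existing version block in nptl/Versions (the version node should match the one used by versioned_symbol, which readelf shows as GLIBC_2.2.5 on x86_64 here).
libpthread {
  GLIBC_2.2.5 {
    my_add;
  }
}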
| |
doc_4546
|
Failed to find the necessary bits to build these modules:
_bsddb bsddb185 dl
imageop linuxaudiodev ossaudiodev
sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
Failed to build these modules:
_curses _curses_panel _ssl
I am most worried about the _ssl module. I used ./configure --with-ssl, as mentioned in another post, but the message is still the same. Any pointers appreciated.
Additional note: make used to complain that it could not build bz2 either, but I fixed that with this post entry recompiling bzip2. Now it's down to _ssl. I'm not sure if I need _curses.
Edit: I found the make log file, and it looks like this is due to the fact that Python 2.6.5 supports SSLv2, while that support was removed in Ubuntu. The log file contains:
*** WARNING: renaming "_ssl" since importing it failed: build/lib.linux-x86_64-2./_ssl.so: undefined symbol: SSLv2_method
This blog has python 2.6.8 rebuilt without the ssl v2 support. I'm trying their changes in the 2.6.5 source now.
Edit 2: Modifying 2.6.5 sources as noted above and removing ssl v2 support fixed the problem with _ssl module not building. Also, here is a list of packages I tried installing earlier:
apt-get install libreadline-dev
apt-get install libssl-dev (already installed)
apt-get install libbz2-dev (already installed)
apt-get install build-essential (already installed)
apt-get install sqlite3
apt-get install tk-dev
apt-get install libsqlite3-dev
apt-get install libc6-dev (already installed)
apt-get install libgdbm-dev
apt-get install libncursesw5-dev
Here is full output from make:
running build
running build_ext
building '_curses' extension
gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I/tmp/nimbula/Python-2.6.5/./Include -I. -IInclude -I./Include -I/usr/local/include -I/tmp/nimbula/Python-2.6.5/Include -I/tmp/nimbula/Python-2.6.5 -c /tmp/nimbula/Python-2.6.5/Modules/_cursesmodule.c -o build/temp.linux-x86_64-2.6/tmp/nimbula/Python-2.6.5/Modules/_cursesmodule.o
building '_curses_panel' extension
gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I/tmp/nimbula/Python-2.6.5/./Include -I. -IInclude -I./Include -I/usr/local/include -I/tmp/nimbula/Python-2.6.5/Include -I/tmp/nimbula/Python-2.6.5 -c /tmp/nimbula/Python-2.6.5/Modules/_curses_panel.c -o build/temp.linux-x86_64-2.6/tmp/nimbula/Python-2.6.5/Modules/_curses_panel.o
Failed to find the necessary bits to build these modules:
_bsddb bsddb185 dl
imageop linuxaudiodev ossaudiodev
sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
Failed to build these modules:
_curses _curses_panel
running build_scripts
Edit 3: Yay, thank you guys for asking these questions. When I looked at the packages I installed earlier, one was clearly not looking good, the libncursesw5-dev (since it has a version in it and I got it from an old post). I tried the following and it solved the problem of _curses and _curses_panel not building:
apt-get install libncurses-dev
After installing libncurses-dev, I executed: make clean, ./configure --with-ssl, make.
Now the output from make is:
running build
running build_ext
Failed to find the necessary bits to build these modules:
_bsddb bsddb185 dl
imageop linuxaudiodev ossaudiodev
sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
running build_scripts
A: Here is how I resolved the Python 2.6.5 installation on Ubuntu 12.10:
1.) I tried to install the following libraries (some were already on the system):
apt-get install libreadline-dev
apt-get install libssl-dev (already installed)
apt-get install libbz2-dev (already installed)
apt-get install build-essential (already installed)
apt-get install sqlite3
apt-get install tk-dev
apt-get install libsqlite3-dev
apt-get install libc6-dev (already installed)
apt-get install libgdbm-dev
apt-get install libncurses-dev
2.) Issue with bz2 module not building:
a.) I downloaded bz2 source from http://www.bzip.org/downloads.html.
b.) Modified the Makefile and changed cc=gcc to cc=gcc -fPIC, following this post.
c.) Executed make and make install.
d.) Tested bz2 with the following code from command line:
python -c "import bz2; print bz2.__doc__"
3.) Issue with _ssl module not building:
a.) Fixed ssl by removing ssl v2 from python source. Followed instructions in this blog by Michael Schurter. It worked like a charm.
4.) At this point I installed Python 2.6.5 using make altinstall, so as not to overwrite the existing python. I pointed /usr/bin/python to my new python installation. Still, a couple of things were missing.
5.) Added ez_setup:
curl -O https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
python ez_setup.py
6.) Added pip:
easy_install -U pip
7.) Installed setuptools:
pip install setuptools
At this point it looks like it's all working!
A: You haven't given us enough information to actually know what happened; build output and build logs exist for a reason…
But I can guess with about 80% confidence:
You don't have the right headers installed to build them.
For example, if you've installed the dpkg for libssl but not for libssl-dev, you won't be able to build _ssl. On Ubuntu, just sudo apt-get install libssl-dev and fix that. On different distros, it may be something like libssl-devel, ssl-dev, etc. But the basic concept of development packages is the same everywhere: to run a program that requires foo, you only need the foo package, but to build a program that requires foo, you need the foo development package as well.
For some of these libraries, it's not quite as obvious which package you're missing, but you should still be able to tell the name of the library or header file it couldn't find by looking at the logs, and you can search, or ask on an Ubuntu forum, to find out which package provides that missing file.
| |
doc_4547
|
a = 5
if a == 5:
pass #Do Nothing
else:
print "Hello World"
Is there a similar way to do this in C#?
A: Is pass used in the context of a loop? If so, use the continue statement:
for (var i = 0; i < 10; ++i)
{
if (i == 5)
{
continue;
}
Console.WriteLine("Hello World");
}
A: Either use an empty block as suggested in other answers, or reverse the condition:
if (a != 5)
{
Console.WriteLine("Hello world");
}
or more mechanically:
if (!(a == 5))
{
Console.WriteLine("Hello world");
}
A: Use empty braces.
int a = 5;
if (a == 5) {}
else {
Console.Write("Hello World");
}
A: A better question would be why you would want to do such a thing. If you're not planning on doing anything then leave it out, rather.
int a = 5;
if (a != 5) {
Console.Write("Hello World");
}
A: Why make it easy when you could do it unnecessarily difficult?
((Action) (() => { }))();
On a serious note, the C# equivalent of the Python pass statement would be
;
since it's a line of code that does nothing. While
{}
would achieve the same result, you're actually creating a scope containing no lines of code.
A: In case you don't want to use empty block, use
;
so the code should look like
int a = 5;
if (a == 5)
;
else
{
Console.Write("Hello World");
}
although, code readability still suffer.
A: Why not just say:
if (a != 5)
{
Console.Write("Hello World");
}
A: Empty block:
{}
| |
doc_4548
|
Is there a way to do it?
A: Built-in AFNetworking tools
For AFNetworking 1.x, use AFHTTPRequestOperationLogger.
For AFNetworking 2.x, use AFNetworkActivityLogger.
These tools both use the NSNotification broadcast by AFNetworking to log request and response data to the console. The amount of information to be displayed is configurable, and they can be configured to ignore certain operations.
Examination in Xcode without these tools
HTTP Requests (outgoing data)
If you want to examine the body of an outgoing request, look at the NSURLRequest's HTTPBody parameter, which is a property on your AFHTTPRequestOperation.
For example, in the method -[AFHTTPClient getPath:parameters:success:failure:], after the request is made, you can type this into the debugger:
po [[NSString alloc] initWithData:request.HTTPBody encoding:4]
4 is NSUTF8StringEncoding, as defined in NSString.h.
The NSURLRequest's HTTPMethod parameter provides the method (GET, POST, PUT, etc.) as an NSString.
HTTP Responses (incoming data)
When your server responds, your success completion block is passed an AFHTTPRequestOperation object (called operation by default). You can:
*
*p (int)[[operation response] statusCode] to see the status code
*po [[operation response] allHeaderFields] to see the headers
*po [operation responseString] to see the response body
*po [operation responseObject] to see the response object (which may be nil if it couldn't be serialized)
A: As of AFNetworking 2.0, you should use AFNetworkActivityLogger
#import "AFNetworkActivityLogger.h"
@implementation AppDelegate
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
#ifdef DEBUG
[[AFNetworkActivityLogger sharedLogger] startLogging];
[[AFNetworkActivityLogger sharedLogger] setLevel:AFLoggerLevelDebug];
#endif
return YES;
}
If you are using 3.0 and using CocoaPods, you will also need to pull AFNetworkActivityLogger from the appropriate branch:
pod 'AFNetworkActivityLogger', git: 'https://github.com/AFNetworking/AFNetworkActivityLogger.git', branch: '3_0_0'
A: You should have a look at https://github.com/AFNetworking/AFHTTPRequestOperationLogger with AFLoggerLevelDebug as level of debugging.
#import "AFHTTPRequestOperationLogger.h"
@implementation AppDelegate
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
#ifdef DEBUG
[[AFHTTPRequestOperationLogger sharedLogger] startLogging];
[[AFHTTPRequestOperationLogger sharedLogger] setLevel:AFLoggerLevelDebug];
#endif
return YES;
}
@end
A: For AFNetworking 3.0 to be able to set the level of logging, you need the following:
#import <AFNetworkActivityLogger/AFNetworkActivityLogger.h>
#import <AFNetworkActivityLogger/AFNetworkActivityConsoleLogger.h>
@implementation AppDelegate
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
AFNetworkActivityConsoleLogger *logger = [AFNetworkActivityLogger sharedLogger].loggers.anyObject;
logger.level = AFLoggerLevelDebug;
[[AFNetworkActivityLogger sharedLogger] startLogging];
return YES;
}
| |
doc_4549
|
Now I have another table called the Logs table. This table records all the status changes that have happened for a particular product (referenced by refno) in a particular timeframe. Suppose the product with refno 5 was set to Publish on 1st October and Sold on 2nd October; the Logs table will then contain:
Refno   status_from   status_to   logtime
5       Stock         Publish     2021-10-01
5       Publish       Sold        2021-10-02
This is how my tables currently look like:
Listings table:('D'=>'Draft','N'=>'Action','Y'=>'Publish')
Logs Table which I'm getting using the following statement:
SELECT refno, logtime, status_from, status_to FROM (
SELECT refno, logtime, status_from, status_to, ROW_NUMBER() OVER(PARTITION BY refno ORDER BY logtime DESC)
AS RN FROM crm_logs WHERE logtime < '2021-10-12 00:00:00' ) r
WHERE r.RN = 1 UNION SELECT refno, logtime, status_from, status_to
FROM crm_logs WHERE logtime <= '2021-10-12 00:00:00' AND logtime >= '2015-10-02 00:00:00'
ORDER BY `refno` ASC
The logs table makes a new record for every status change and stores the current timestamp as the logtime, while the listings table changes/updates the status and updates its update_date. Now, to get the total listings as of today, I'm using the following statement:
SELECT SUM(status_to = 'D') AS draft, SUM(status_to = 'N') AS action, SUM(status_to = 'Y') AS publish FROM `crm_listings`
And this returns all the count data for status as of the current day.
Now this is where it gets confusing for me. Suppose today the count under Action is 10 and tomorrow it'll be 15, and I want to retrieve the total that was present yesterday (10). For this, what I would have to do is take today's total (15) and subtract all the places where a product was changed to Draft between yesterday and today (the total count today in the listings table minus count(*) where status_to = 'Action' from the logs table). Or vice versa: if yesterday it was 10 under Action and today it is 5, it should add the values from the status_from column in the logs table.
Note: Refno isn't unique in my logs table since a product with the same refno can be marked as publish 1 day and unpublish another, but it is unique in my listings table.
Link to dbfiddle: https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=01cb3ccfda09f6ddbbbaf02ec92ca894
A: I am sure it can be simplified or done better, but this is my query and logic:
*
*I found the status changes per refno and calculated the total changes from the desired day to the present:
select status_logs, sum(cnt_status) to_add from (
SELECT
status_to as status_logs, -1*count(*) cnt_status
FROM logs lm
where
id = (select max(id) from logs l where l.refno = lm.refno) and
logtime >= '2021-10-01 00:00:00'
group by status_to
union all
SELECT
status_from, count(*) cnt_status_from
FROM logs lm
where
id = (select max(id) from logs l where l.refno = lm.refno) and
logtime >= '2021-10-01 00:00:00'
group by status_from ) total_changes
group by status_logs
*I matched the keys between the listings table and the logs table by converting the listings table keys:
select
case status
when 'D' THEN 'Draft'
when 'A' THEN 'Action'
when 'Y' THEN 'Publish'
when 'S' THEN 'Sold'
when 'N' THEN 'Let'
END status_l ,COUNT(*) c
from listings
group by status
*I joined them and added the calculations to the total sum of the current data.
*I had to emulate a full outer join, so I have one left and one right join with the same subqueries.
Lastly, I used DISTINCT, since each joined query generates the same result, and IFNULL to bring the other table's status into the other column.
select distinct IFNULL(status_l, status_logs) status, counts_at_2021_10_01
from (select l.*,
logs.*,
l.c + ifnull(logs.to_add, 0) counts_at_2021_10_01
from (select case status
when 'D' THEN
'Draft'
when 'A' THEN
'Action'
when 'Y' THEN
'Publish'
when 'S' THEN
'Sold'
when 'N' THEN
'Let'
END status_l,
COUNT(*) c
from listings
group by status) l
left join (
select status_logs, sum(cnt_status) to_add
from (SELECT status_to as status_logs,
-1 * count(*) cnt_status
FROM logs lm
where id = (select max(id)
from logs l
where l.refno = lm.refno)
and logtime >= '2021-10-01 00:00:00'
group by status_to
union all
SELECT status_from, count(*) cnt_status_from
FROM logs lm
where id = (select max(id)
from logs l
where l.refno = lm.refno)
and logtime >= '2021-10-01 00:00:00'
group by status_from) total_changes
group by status_logs) logs
on logs.status_logs = l.status_l
union all
select l.*,
logs.*,
l.c + ifnull(logs.to_add, 0) counts_at_2021_05_01
from (select case status
when 'D' THEN
'Draft'
when 'A' THEN
'Action'
when 'Y' THEN
'Publish'
when 'S' THEN
'Sold'
when 'N' THEN
'Let'
END status_l,
COUNT(*) c
from listings
group by status) l
right join (
select status_logs, sum(cnt_status) to_add
from (SELECT status_to as status_logs,
-1 * count(*) cnt_status
FROM logs lm
where id = (select max(id)
from logs l
where l.refno = lm.refno)
and logtime >= '2021-10-01 00:00:00'
group by status_to
union all
SELECT status_from, count(*) cnt_status_from
FROM logs lm
where id = (select max(id)
from logs l
where l.refno = lm.refno)
and logtime >= '2021-10-01 00:00:00'
group by status_from) total_changes
group by status_logs) logs
on logs.status_logs = l.status_l) l
| |
doc_4550
|
The thing that I cannot understand is in the following piece of code: why are some of the field variables declared "static"?
public class InvestmentFrame2 extends JFrame{
private static final int FRAME_WIDTH = 450;
private static final int FRAME_HEIGHT = 100;
private static final double DEFAULT_RATE = 5;
private static final double INITIAL_BALANCE = 1000;
private JLabel rareLabel;
private JTextField rateField;
private JButton button;
private JLabel resultLabel;
private double balance;
}
| |
doc_4551
|
How do I get the script to reference a separate worksheet called Process Steps, where the text value is in cell C7, instead of hard-coding the text in the script as "Step1"?
ActiveSheet.Shapes.AddShape(msoShapeRectangle, 50, 50, 100, 50).Select
Selection.Formula = ""
Selection.ShapeRange.ShapeStyle = msoShapeStylePreset40
Selection.ShapeRange(1).TextFrame2.TextRange.Characters.Text = "Step1"
A: Peter has already mentioned how to pick up a value from another cell; taking this a bit further.
Please avoid the use of .Select/.Activate INTERESTING READ
Is this what you are trying?
Sub Sample()
Dim shp As Shape
Set shp = ActiveSheet.Shapes.AddShape(msoShapeRectangle, 50, 50, 100, 50)
With shp.OLEFormat.Object
.Formula = ""
.ShapeRange.ShapeStyle = msoShapeStylePreset40
.ShapeRange(1).TextFrame2.TextRange.Characters.Text = _
ThisWorkbook.Sheets("Process Steps").Range("C7").Value
End With
End Sub
| |
doc_4552
|
If it's possible, what are the steps to configure and use it?
I'm using joinFaces 3.2.4 with rewrite-spring-boot-starter.
A: Actually I found a very nice solution at this link, using JSF itself:
page-redirect
Hopefully it will be useful:
https://www.codenotfound.com/jsf-welcome-page-redirect-example-spring-boot.html
I spent a lot of time on this with no luck in Spring Boot. But if you want to go that route,
you need to add these dependencies:
<dependency>
<groupId>org.ocpsoft.rewrite</groupId>
<artifactId>rewrite-servlet</artifactId>
<version>3.4.2.Final</version>
</dependency>
<dependency>
<groupId>org.ocpsoft.rewrite</groupId>
<artifactId>rewrite-integration-faces</artifactId>
<version>3.4.2.Final</version>
</dependency>
<dependency>
<groupId>org.ocpsoft.rewrite</groupId>
<artifactId>rewrite-config-prettyfaces</artifactId>
<version>3.4.2.Final</version>
</dependency>
| |
doc_4553
|
A DNS entry was created for this custom URL. Trying to point the application at the URL results in: Cannot assign requested address. I've researched this error and haven't come across anything that helps me out in my situation.
Docker file:
FROM python:3.9-bullseye
ADD . /PIE
WORKDIR /PIE
RUN pip install -r requirements.txt
CMD ["python3", "app.py"]
Docker compose file:
version: "3"
services:
app:
build: .
command: python app.py
ports:
- "5060:5060"
volumes:
- .:/PIE
Code snippet:
from flask import Flask, url_for, render_template, request, redirect, session
from flask_wtf.csrf import CSRFProtect
from data import dat_a
app = Flask(__name__)
app.register_blueprint(dat_a)
csrf = CSRFProtect(app)
csrf.init_app(app)
app = Flask(__name__)
@app.route('/', methods=['GET'])
def index():
if session.get('logged_in'):
return render_template('home.html')
else:
return render_template('index.html')
if __name__ == '__main__':
app.secret_key = 'xxxx'
app.run(debug=True, host='mysite.org', port=5060)
I have tried the following:
if __name__ == '__main__':
app.secret_key = 'xxxx'
app.config['SERVER_NAME'] = 'mysite.org'
app.run()
============================================
if __name__ == '__main__':
app.secret_key = 'xxxx'
url = 'mysite.org:5060'
app.config['SERVER_NAME'] = url
app.run()
If I do something like:
if __name__ == '__main__':
app.secret_key = 'xxxx'
app.run(debug=True, host='localhost', port=5060)
It works just fine, which makes me think the Docker container is only listening on the local side? I've also tried using gunicorn3 to create a WSGI server and move off the development server.
Environment:
*
*Nginx for a proxy manager
*Docker installed on a remote server (using portainer as a gui interface)
| |
doc_4554
|
This is the code with static data in the HTML.
HTML:-
<div class=" container-fluid news-slider">
<div class="row mySlides fad">
<div class=" col-xl-2 col-lg-2 col-md-2 col-sm-2 newsitem">
<mat-card class="insidecard newscard">
<img mat-card-image src="../../assets/img/download.jpg" class="newsimage">
<mat-card-content>
<div class="newsdetails">
The Shiba Inu is the smallest of the six original and distinct spitz breeds of dog from Japan.
A small, agile dog that copes very well with mountainous terrain, the Shiba Inu was originally
bred for hunting.
</div>
</mat-card-content>
</mat-card>
</div>
<div class=" col-xl-2 col-lg-2 col-md-2 col-sm-2 newsitem">
<mat-card class="insidecard newscard">
<img mat-card-image src="../../assets/img/download.jpg" class="newsimage">
<mat-card-content>
<div class="newsdetails">
The Shiba Inu is the smallest of the six original and distinct spitz breeds of dog from Japan.
A small, agile dog that copes very well with mountainous terrain, the Shiba Inu was originally
bred for hunting.
</div>
</mat-card-content>
</mat-card>
</div>
<div class=" col-xl-2 col-lg-2 col-md-2 col-sm-2 newsitem">
<mat-card class="insidecard newscard">
<img mat-card-image src="../../assets/img/download.jpg" class="newsimage">
<mat-card-content>
<div class="newsdetails">
The Shiba Inu is the smallest of the six original and distinct spitz breeds of dog from Japan.
A small, agile dog that copes very well with mountainous terrain, the Shiba Inu was originally
bred for hunting.
</div>
</mat-card-content>
</mat-card>
</div>
<div class=" col-xl-2 col-lg-2 col-md-2 col-sm-2 newsitem">
<mat-card class="insidecard newscard">
<img mat-card-image src="../../assets/img/download.jpg" class="newsimage">
<mat-card-content>
<div class="newsdetails">
The Shiba Inu is the smallest of the six original and distinct spitz breeds of dog from Japan.
A small, agile dog that copes very well with mountainous terrain, the Shiba Inu was originally
bred for hunting.
</div>
</mat-card-content>
</mat-card>
</div>
<div class=" col-xl-2 col-lg-2 col-md-2 col-sm-2 newsitem">
<mat-card class="insidecard newscard">
<img mat-card-image src="../../assets/img/download.jpg" class="newsimage">
<mat-card-content>
<div class="newsdetails">
The Shiba Inu is the smallest of the six original and distinct spitz breeds of dog from Japan.
A small, agile dog that copes very well with mountainous terrain, the Shiba Inu was originally
bred for hunting.
</div>
</mat-card-content>
</mat-card>
</div>
<div class=" col-xl-2 col-lg-2 col-md-2 col-sm-2 newsitem">
<mat-card class="insidecard newscard">
<img mat-card-image src="../../assets/img/download.jpg" class="newsimage">
<mat-card-content>
<div class="newsdetails">
The Shiba Inu is the smallest of the six original and distinct spitz breeds of dog from Japan.
A small, agile dog that copes very well with mountainous terrain, the Shiba Inu was originally
bred for hunting.
</div>
</mat-card-content>
</mat-card>
</div>
</div>
<a class="pre" (click)="plusSlides(-1)">❮</a>
<a class="nex" (click)="plusSlides(1)">❯</a>
</div>
CSS:-
.news-slider{
position: relative;
}
.mySlides{
display: none;
}
.pre,.nex{
cursor: pointer;
position: absolute;
top:50%;
width: auto;
padding: 16px;
margin-top: -22px;
color:red;
font-weight: bold;
font-size: 18px;
transition: 0.6s ease;
border-radius: 0 3px 3px 0;
user-select: none;
background-color:white;
box-shadow: 1px 2px 10px -1px rgba(0,0,0,.3);
}
.nex {
right: 0;
border-radius: 3px 0 0 3px;
margin-right: 0px;
}
.pre{
margin-left:-15px;
}
.fad {
-webkit-animation-name: fade;
-webkit-animation-duration: 1.5s;
animation-name: fade;
animation-duration: 1.5s;
}
Angular:-
export class MainpageComponent implements OnInit {
slideIndex = 1;
parent = document.getElementsByClassName("mySlides");
constructor(config : NgbCarouselConfig,public httpclient:HttpClient,private renderer:Renderer2) {
config.interval = 2000;
config.wrap = true;
config.keyboard = false;
config.pauseOnHover = true;
}
ngOnInit() {
this.showSlides(this.slideIndex);
}
showSlides(n)
{
var i;
if(n>this.parent.length)
{
this.slideIndex = 1;
}
if(n<1)
{
this.slideIndex = this.parent.length;
}
for(i=0;i<this.parent.length;i++)
{
this.renderer.setStyle(this.parent[i],'display','none');
}
this.renderer.setStyle(this.parent[this.slideIndex-1],'display','flex');
console.log(this.parent[0]);
}
plusSlides(n)
{
this.showSlides(this.slideIndex += n);
}
}
This is the code that I have used for the dynamic display:
HTML:-
<div class=" container-fluid news-slider">
<div class="row mySlides fad" *ngFor="let newsarray of newschunk">
<div class=" col-xl-2 col-lg-2 col-md-2 col-sm-2 newsitem" *ngFor="let item of newsarray">
<mat-card class="insidecard newscard">
<img mat-card-image [src]="item.img" class="newimage">
<mat-card-content>
<div class="newsdetails">
{{item.description}}
</div>
</mat-card-content>
</mat-card>
</div>
</div>
<a class="pre" (click)="plusSlides(-1)">❮</a>
<a class="nex" (click)="plusSlides(1)">❯</a>
</div>
CSS:-
.news-slider{
position: relative;
}
.mySlides{
display: none;
}
.pre,.nex{
cursor: pointer;
position: absolute;
top:50%;
width: auto;
padding: 16px;
margin-top: -22px;
color:red;
font-weight: bold;
font-size: 18px;
transition: 0.6s ease;
border-radius: 0 3px 3px 0;
user-select: none;
background-color:white;
box-shadow: 1px 2px 10px -1px rgba(0,0,0,.3);
}
.nex {
right: 0;
border-radius: 3px 0 0 3px;
margin-right: 0px;
}
.pre{
margin-left:-15px;
}
.fad {
-webkit-animation-name: fade;
-webkit-animation-duration: 1.5s;
animation-name: fade;
animation-duration: 1.5s;
}
ANGULAR:-
export class MainpageComponent implements OnInit {
slideIndex = 1;
parent = document.getElementsByClassName("mySlides");
public newsdata = [
{
title: 'Card Title 1',
description: 'Some quick example text to build on the card title and make up the bulk of the card content',
buttonText: 'Button',
img: 'https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(34).jpg'
},
{
title: 'Card Title 2',
description: 'Some quick example text to build on the card title and make up the bulk of the card content',
buttonText: 'Button',
img: 'https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(34).jpg'
},
{
title: 'Card Title 3',
description: 'Some quick example text to build on the card title and make up the bulk of the card content',
buttonText: 'Button',
img: 'https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(34).jpg'
},
{
title: 'Card Title 4',
description: 'Some quick example text to build on the card title and make up the bulk of the card content',
buttonText: 'Button',
img: 'https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(34).jpg'
},
{
title: 'Card Title 5',
description: 'Some quick example text to build on the card title and make up the bulk of the card content',
buttonText: 'Button',
img: 'https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(34).jpg'
},
{
title: 'Card Title 6',
description: 'Some quick example text to build on the card title and make up the bulk of the card content',
buttonText: 'Button',
img: 'https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(34).jpg'
},
{
title: 'Card Title 7',
description: 'Some quick example text to build on the card title and make up the bulk of the card content',
buttonText: 'Button',
img: 'https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(34).jpg'
},
{
title: 'Card Title 8',
description: 'Some quick example text to build on the card title and make up the bulk of the card content',
buttonText: 'Button',
img: 'https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(34).jpg'
},
{
title: 'Card Title 9',
description: 'Some quick example text to build on the card title and make up the bulk of the card content',
buttonText: 'Button',
img: 'https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(34).jpg'
},
];
public newschunk:any=[[]];
constructor(config : NgbCarouselConfig,public httpclient:HttpClient,private renderer:Renderer2) {
config.interval = 2000;
config.wrap = true;
config.keyboard = false;
config.pauseOnHover = true;
}
ngOnInit() {
//this.changecol.send("yes");
this.getTopNews();
//console.log(this.newsdiv);
//console.log(this.parent[0]);
}
showSlides(n)
{
var i;
if(n>this.parent.length)
{
this.slideIndex = 1;
}
if(n<1)
{
this.slideIndex = this.parent.length;
}
for(i=0;i<this.parent.length;i++)
{
this.renderer.setStyle(this.parent[i],'display','none');
}
this.renderer.setStyle(this.parent[this.slideIndex-1],'display','flex');
console.log(this.parent[0]);
}
plusSlides(n)
{
this.showSlides(this.slideIndex += n);
}
getTopNews() {
this.httpclient.get<{message:any,errorMessage:string}>("http://localhost:3000/trendingNews").subscribe((responsedata)=>{
//this.newsdata=responsedata.message;
this.newschunk = this.getChunks(this.newsdata,6);
this.showSlides(this.slideIndex);
},(error)=>{
console.log(error);
this.renderer.setStyle(this.newsdiv[0],'display','none');
});
}
getChunks(arr,size)
{
let chunkarray = [];
for(let i=0;i<arr.length;i+=size)
{
chunkarray.push(arr.slice(i,i+size));
}
return chunkarray;
}
}
1st image with static data in html
2nd image with dynamic data from angular without sliding
3rd image when i click the next arrow
Behavior:
A: you had 2 things going on:
*
*Fetch data in newschuck
*Show the slides
Issue #1: You did both these tasks in ngOnInit - fetching data (point #1) is fine in OnInit, but showing the slides (point #2) wouldn't work because ngOnInit runs before the page is rendered.
Issue #2: If you had put both of these (points #1 & #2) in ngAfterViewInit - you get an error 'expression changed after it was checked...'
Solution: fetch data (point #1) in OnInit; display the slides (point #2) after the page has rendered. To do this, I created a boolean variable (it'll help in case you are getting data from a REST API).
Check the complete demo from your GitHub here
EDIT (1) - to do this against a REST API, move the code you had inside ngAfterViewInit into the complete callback (the final () => { } argument of subscribe), as seen below:
getTopNews() {
  this.httpclient.get<{ message: any, errorMessage: string }>("localhost:3000/trendingNews").subscribe(
    responsedata => {
      this.newsdata = responsedata.message;
      this.newschunk = this.getChunks(this.newsdata, 3);
      this.arrayUpdated = true;
    },
    error => {
      console.log(error);
      this.renderer.setStyle(this.newsdiv[0], 'display', 'none');
    },
    /* this is the "finally" (complete) block */
    () => {
      if (this.arrayUpdated) { this.showSlides(this.slideIndex); }
    }
  );
}
| |
doc_4555
|
db.execSQL("CREATE VIRTUAL TABLE " + Msg._TABLE_NAME + " USING FTS3 ("
+ Msg._ID + " INTEGER, "
(...)
+ Msg.READ + " SHORT DEFAULT 0,"
+ Msg.URGENT + " SHORT DEFAULT 0"
+ ");");
Then I am trying to query this using parametrized query:
String[] columns = new String[] {Msg.ROWID, Msg.TITLE, Msg.READ, Msg.URGENT};
(...)
getContentResolver().query(Msg.CONTENT_URI, columns,
Msg.URGENT + "=? AND " + Msg.READ + "=?", whereArgs, null);
where whereArgs varies for each query:
String[] urgentUnread = new String[]{"1", "0"};
String[] regularUnread = new String[]{"0", "0"};
but no matter what it returns 0 results/rows even though data exist. The content provider does not change the params and the query returns nothing using QueryBuilder as well as when calling query "directly":
Cursor c = db.query(tables, columns, where, whereArgs, groupBy, having, orderBy, limit);
The query works if I do just String concat:
getContentResolver().query(Msg.CONTENT_URI, columns,
Msg.READ + "=0 AND " + Msg.URGENT + "=1", null, null);
but that seems to kill the purpose of param queries and is nasty to cache. Dalvik complains (after making lot of queries) that there is no space in cache for query and, ironically, tells me to use parametrized queries with '?'. I would love to, trust me :)
I know JavaDoc states that parameters are bound as StringS but I just simply can't believe that... because that would be major ...ahem, ... WTF
Where did I go wrong here?
Thanks in advance.
A: This is the OP. I was researching and experimenting further and came to the conclusion that FTS3 is to blame. Since I need the data to be searchable by full text, I was creating a VIRTUAL TABLE USING FTS3, and that is where the parameter binding failed.
As I do not want to query the shadow table (Msg_content) directly, my solution is to split the data into 2 related tables:
db.execSQL("CREATE TABLE " + Msg._TABLE_NAME + " (" +
Msg._ID + PRIMARY_KEY_AUTOINC +
Msg.PRIORITY + " TEXT," +
Msg.RECEIVED + " INTEGER," +
Msg.MOBILE_STATUS + " INTEGER DEFAULT 0," +
Msg.READ + " SHORT DEFAULT 0," +
Msg.FLASH + " SHORT DEFAULT 0" +
");");
db.execSQL("CREATE VIRTUAL TABLE " + MsgText._TABLE_NAME + " USING FTS3 (" +
MsgText._ID + PRIMARY_KEY +
MsgText.TITLE + " TEXT," +
MsgText.CONTENT + " TEXT," +
MsgText.KEYWORDS + " TEXT," +
"FOREIGN KEY(" + MsgText._ID + ") " +
"REFERENCES " + Msg._TABLE_NAME + "(" + Msg._ID + ") " +
");");
Then I created View to use by queries:
db.execSQL("CREATE VIEW IF NOT EXISTS " + View.MSG_CONTENT +
" AS SELECT " +
Msg._TABLE_NAME + "." + Msg._ID + ", " +
Msg._TABLE_NAME + "." + Msg.READ + ", " +
Msg._TABLE_NAME + "." + Msg.FLASH + ", " +
(...)
MsgText._TABLE_NAME + "." + MsgText.TITLE + ", " +
MsgText._TABLE_NAME + "." + MsgText.CONTENT +
" FROM " + Msg._TABLE_NAME + ", " + MsgText._TABLE_NAME +
" WHERE " + Msg._TABLE_NAME + "." + Msg._ID + "=" +
MsgText._TABLE_NAME + "." + MsgText._ID);
This works very well for me as I can query data using parameters and do fulltext search when needed. Query performance is the same as when using just one table.
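For completeness, a hedged sketch of how the parametrized query then looks against the view; the column constants are the ones defined above, and the argument values are just examples.
String[] columns = new String[] { Msg._ID, MsgText.TITLE, Msg.READ, Msg.FLASH };
String[] whereArgs = new String[] { "1", "0" };

Cursor c = db.query(View.MSG_CONTENT, columns,
        Msg.FLASH + "=? AND " + Msg.READ + "=?",
        whereArgs, null, null, null);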
I hope this helps someone else who might bump into the same issue.
Cheers,
PeS
P.S. Checked Meta and it is OK to reply to self, apparently.
| |
doc_4556
|
I would like to have an entity for each day (called dailyGoal). Whenever the user opens the app, the app first checks whether an entity for this date was already created and creates a new one if it does not exist.
I am having some problems with time zones.
The user creates a dailyGoal entity in New York and travels to San Francisco on the same day (or vice versa). I cannot just use the midnight date to fetch the existing entity because the midnights are different in this case. I tried using time intervals, but that is also not a good solution.
Matic
A: The entity can be based exclusively on year/month/day. Whenever a user opens the application, you extract year/month/day from the local date/time. If there exists an entity for year/month/day you use that, otherwise you make a new entity for the year/month/day.
In practice, this means that the entity will persist for 27 hours on a day trip from NYC to San Francisco, and for 21 hours on a day trip from San Francisco to NYC. But that aligns with the user's perception: the day seems to go by slower traveling from the east to the west of the USA because you gain 3 hours, and the day goes by faster when you travel from the west to the east of the USA because you lose 3 hours.
A: I found out that I can easily solve the problem of timezones by normalizing daily goal dates to noon instead of midnight. That way, UTC date is always correct regardless of time zones.
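A hedged Swift-style sketch of that normalization (the question shows no code, so this is only illustrative): keep the user's local calendar day and pin the time to noon both when storing the date on the dailyGoal entity and when building the fetch predicate.
let noonToday = Calendar.current.date(
    bySettingHour: 12, minute: 0, second: 0, of: Date())!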
| |
doc_4557
|
I'm using contentEditable = true, and it works when I comment out my handleEditItem function, but when I turn it on, I can only insert one character at a time, forcing me to keep clicking to edit.
Clearly this problem stems from my function, but I can't seem to figure out why.
//Responsible for listening for an edit and updating my object with the new text.
function handleEditItem() {
$('.js-shopping-item').on('input', function(event) {
const itemIndex = getItemIndexFromElement(event.currentTarget); //assigning the index of the edited item to itemIndex
const updatedItem = STORE.items[itemIndex];
updatedItem.name = event.currentTarget.innerHTML;
renderShoppingList();
});
}
//Returns the index of an Item in the Store
function getItemIndexFromElement(item) {
const itemIndexString = $(item)
.closest('.js-item-index-element')
.attr('data-item-index');
return parseInt(itemIndexString, 10);
}
//Function responsible for returning the template HTML to insert into the page.
function generateItemElement(item) {
let itemIndex = STORE.items.indexOf(item);
return `
<li class="js-item-index-element" data-item-index="${itemIndex}">
<span contentEditable='true' class="shopping-item js-shopping-item ${item.checked ? 'shopping-item__checked' : ''}">${item.name}</span>
<div class="shopping-item-controls">
<button class="shopping-item-toggle js-item-toggle">
<span class="button-label">check</span>
</button>
<button class="shopping-item-delete js-item-delete">
<span class="button-label">delete</span>
</button>
</div>
</li>`;
}
| |
doc_4558
|
try {
MediaController mc = new MediaController(this);
MediaPlayer mMediaPlayer = new MediaPlayer();
mMediaPlayer.prepare();
mMediaPlayer.start();
mMediaPlayer.seekTo(0);
}
catch(Exception e) {
e.printStackTrace();
}
How can I show the MediaController and use it to control mMediaPlayer?
| |
doc_4559
|
Problem Statement: How do I pass the aggregate values calculated (on the fly) in UserControl2 to UserControl1. Both are leveraging the same ViewModel (DataContext for both set to CommonViewModel)
More Technical Details:
*
*The Aggregate values are all calculated in XAML (using aggregate functions of a 3rd party control - in this case Telerik)
*I understand that I should/would have public properties on my ViewModel which will hold these values (being set from UC2 upon loading/calculation)
*The Aggregate values are not displayed in the UC2 hence cannot use the simple 2 way binding mode (to set the public property on the ViewModel)
*This is not a telerik related specific question
Specific questions:
*
*How do I set a value in the ViewModel for a value which is not displayed in UC2 ?
*Is this overall approach right - wherein I am using a public property on the ViewModel as a storage mechanism to propagate values between 2 user controls. I don't have any specific requirement to store these values in a persistent store (such as a DB) at any point of time.
Edit - here is the code snippet -
<telerik:RadGridView
Name="ABC"
ItemsSource="{Binding appstats}"
AutoGenerateColumns="False">
<telerik:RadGridView.Columns>
<telerik:GridViewDataColumn
DataMemberBinding="{Binding Name}"
Header="Name"
TextAlignment="Justify"
IsFilterable="False">
<!-- Telerik Count Function to get total number of Names -->
<telerik:GridViewDataColumn.AggregateFunctions>
<telerik:CountFunction FunctionName="AppCount" />
</telerik:GridViewDataColumn.AggregateFunctions>
<!--End Region -->
</telerik:GridViewDataColumn>
</telerik:RadGridView.Columns>
Within the same usercontrol (UC2.xaml) today - I do the following:
<StackPanel Orientation="Horizontal" Margin="10" Grid.Row="0">
<TextBlock Text="Total number of Names:" />
<TextBlock Text="{Binding AggregateResults[AppCount].FormattedValue, ElementName=ABC}" />
</StackPanel>
This second section of the code is what I want to move to UC1. As suggested below (and had tried it already) - I can hide this element here and have it set a value in VM but was trying to avoid that approach.
Really appreciate any insight on approach and specific code snippet.
Thanks
| |
doc_4560
|
I'm struggling to get my colorscheme to work nicely with various filetypes. It would be handy to have a command that prints out the current color-group for the text at the current cursor position.
for example ([X] marks the cursor position):
def foobar
@some[X]thing = "foo"
end
would print out "Identifier" (If I'm right about that one ;-))
Is anything like that possible?
Or do you have any other recommendation how to "solve" the problem of identifying the right color groups to use?
A: See here: Find out to which highlight-group a particular keyword/symbol belongs in vim
In addition, there's even a ready-to-use plugin for that: SyntaxAttr.vim
| |
doc_4561
|
public abstract class Creature<T> where T : new() {
protected Creature()
{
Classification = new T();
}
public abstract T Classification { get; protected set; }
}
public class Dog : Creature<Animal>
{
public override Animal Classification { get; protected set; }
}
public class Animal{
public void AnimalSpecificMethod() { }
}
How can I solve this prolbem? Maybe an idea to re-design this structure?
What I would like to achieve is for the Dog class to contain the same Classification type as declared through the Creature class.
Thank you!
A: The problem is that this call, in the constructor, is a virtual call to the setter:
Classification = new T();
You could instead add a field backed property, which would avoid the issue:
protected Creature()
{
_classification = new T();
}
private T _classification;
public virtual T Classification
{
get { return _classification; }
protected set { _classification = value; }
}
This does seem like a strange design, but I can't offer any specific design advice without more details of what you're trying to achieve. The Dog / Classification example is too general for specific suggestions. The above change will get around your immediate problem.
| |
doc_4562
|
The filter is selected using checkboxes.
My issue:
On page load there is a useEffect that changes every checkbox to false. This is based on the props coming in from the API.
I'd like the checkbox state on page load (and when the filter opens) to be restored based on what the user has selected previously in their session.
code:
Filter component*
[...]
import FilterSection from "../FilterSection";
const Filter = ({
open,
handleClose,
setFilterOptions,
[..]
roomNumbers,
}) => {
const [roomValue, setRoomValue] = React.useState();
const [roomListProp, setRoomListProp] = React.useState(); // e.g. [["roomone", false], ["roomtwo", true]];
const sendRoomFilterData = (checkedRoomsFilterData) => {
setRoomValue(checkedRoomsFilterData);
};
const setCheckboxListPropRoom = (data) => {
setRoomListProp(data);
};
// extract, convert to an object and pass back down? or set local storage and get
// local storage and pass back down so that we can get it later?
const convertToLocalStorageFilterObject = (roomData) => { // []
if (roomData !== undefined) {
const checkedRooms = roomData.reduce((a, curval) => ({ ...a, [curval[0]]: curval[1] }), {});
localStorage.setItem("preserved", JSON.stringify(checkedRooms)); // sets in local storage but values get wiped on page load.
}
};
React.useEffect(() => {
const preservedFilterState = convertToLocalStorageFilterObject(roomListProp);
}, [roomListProp]);
const applyFilters = () => {
setFilterOptions([roomValue]);
handleClose();
};
const classes = CurrentBookingStyle();
return (
<Dialog
fullWidth
maxWidth="sm"
open={open}
onClose={() => handleClose(false)}
>
<DialogTitle>Filter By:</DialogTitle>
<DialogContent className={classes.margin}>
<FilterSection
filterName="Room number:"
filterData={roomNumbers}
setFilterOptions={sendRoomFilterData}
setCheckboxListProp={setCheckboxListPropRoom}
/>
</DialogContent>
<DialogActions>
<Button variant="contained" onClick={applyFilters}>
Apply Filters
</Button>
</DialogActions>
</Dialog>
);
};
Filter Section used in Filter
import {
TableCell,
Typography,
FormControlLabel,
Checkbox,
FormGroup,
} from "@material-ui/core";
const FilterSection = ({
filterData, filterName, setFilterOptions, setCheckboxListProp
}) => {
const [checkboxValue, setCheckboxValue] = React.useState({});
const [checkboxFilterList, setCheckboxFilterList] = React.useState([]);
const handleCheckboxChange = (event) => {
setCheckboxValue({
...checkboxValue,
[event.target.name]: event.target.checked, // room1: true
});
};
const convertToObject = () => filterData // ["room1", "room2"]; comes from API
.filter((val) => !Object.keys(checkboxValue).includes(val))
.reduce((acc, currval) => ({
...acc, [currval]: false, // converts array to object and sets values to false
}), checkboxValue);
React.useEffect(() => {
const transformedCheckboxListItems = Object.entries(convertToObject());
setCheckboxFilterList(transformedCheckboxListItems);
setFilterOptions(transformedCheckboxListItems.filter(([, val]) => val).map(([key]) => key));
setCheckboxListProp(transformedCheckboxListItems);
}, [checkboxValue]);
return (
<>
<Typography style={{ fontWeight: "bold" }}>{filterName}</Typography>
<FormGroup row>
{checkboxFilterList.map(([key, val]) => (
<TableCell style={{ border: 0 }}>
<FormControlLabel
control={(
<Checkbox
checked={val}
onChange={handleCheckboxChange}
name={key}
color="primary"
/>
)}
label={key}
/>
</TableCell>
))}
</FormGroup>
</>
);
};
What i have tried:
I have created a reusable component called "FilterSection" which takes takes data from the API "filterData" and transforms it from an array to an object to set the initial state for the filter checkboxes.
On page load of the filter I would like the checkboxes to be true or false depending on what the user has selected, however this does not work as the convertToObject function in my FilterSection component converts everything to false again on page load. I want to be able to change this but not sure how? - with a conditional?
I have tried to do this by sending up the state for the selected checkboxes to the Filter component then setting the local storage, then the next step would be to get the local storage data and somehow use this to set the state before / after page load. Unsure how to go about this.
Thanks in advance
A: I am not sure if I understand it correctly, but let me have a go:
I have no idea what convertToObject does, but I assume it extracts the saved filters from localStorage and ... updates the filter value that has just been changed?
Each time the FilterSection renders for the first time, checkboxValue state is being initialised and an useEffect runs setCheckboxListProp, which clears the options, right?
If this is your problem, try running setCheckboxListProp directly in the handleCheckboxChange callback rather than in an useEffect. This will ensure it runs ONLY after the value is changed by manual action and not when the checkboxValue state is initialised.
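A rough sketch of that suggestion, reusing the names from the question (illustrative only, not tested against the full component):
const handleCheckboxChange = (event) => {
  const nextValue = {
    ...checkboxValue,
    [event.target.name]: event.target.checked,
  };
  setCheckboxValue(nextValue);
  // Push the derived data up only on a real user action,
  // not from an effect that also fires on mount.
  const transformed = Object.entries(nextValue);
  setFilterOptions(transformed.filter(([, val]) => val).map(([key]) => key));
  setCheckboxListProp(transformed);
};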
A: I solved my problem by moving this line:
const [checkboxValue, setCheckboxValue] = React.useState({});
outside of the component it was in because every time the component re-rendered it ran the function (convertToObject() which reset each checkbox to false
By moving the state for the checkboxes up three layers to the parent component, the state never got refreshed when the filter pop-up closed. Now the checkbox data persists, which is the result I wanted.
:D
| |
doc_4563
|
App Main Activity
User clicks item to open market place
Market place opens
User hits home and does some other stuff.
User re-opens the app and it takes the user to the last activity in the stack which is the market place.
I want it to go instead
App Main Activity
User clicks item to open market place
Market place opens
User hits home and does some other stuff.
User re-opens the app and it returns to the apps main activity.
Now I could do this in code if the market place activity was part of my app but it's not so I'm a bit stuck.
Thanks a lot.
A: Intent browserIntent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
browserIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
The key is the FLAG_ACTIVITY_NEW_TASK. This flag is generally used by activities that want to present a "launcher" style behavior: they give the user a list of separate things that can be done, which otherwise run completely independently of the activity launching them.
| |
doc_4564
|
set.seed(42)
n <- 100
dat1 <- data.frame(id=1:n,
treat = factor(sample(c('Trt','Ctrl'), n, rep=TRUE, prob=c(.5, .5))),
time = factor("T1"),
outcome1=rbinom(n = 100, size = 1, prob = 0.3),
st=runif(n, min=24, max=60),
qt=runif(n, min=.24, max=.60),
zt=runif(n, min=124, max=360)
)
dat2 <- data.frame(id=1:n,
treat = dat1$treat,
time = factor("T2"),
outcome1=dat1$outcome1,
st=runif(n, min=34, max=80),
qt=runif(n, min=.44, max=.90),
zt=runif(n, min=214, max=460)
)
dat3 <- data.frame(id=1:n,
treat = dat1$treat,
time = factor("T3"),
outcome1=dat1$outcome1,
st=runif(n, min=44, max=90),
qt=runif(n, min=.74, max=1.60),
zt=runif(n, min=324, max=1760)
)
dat <- rbind(dat1,dat2, dat3)
ggplot(dat,aes(x=mean(zt), y=time)) + geom_point(aes(colour=treat)) + coord_flip() + geom_line(aes(colour=treat))
I have three questions
*
*can a line be added connecting T1 to T2 to T3 showing the trend
*can the 95%CI for the mean be added to each point without having to calculate a "ymin" and "ymax" for all my response variables
*if I have multiple response variables (in this example "st", "qt" and "zt") is there a way to produce these all at one as some sort of facet?
A: Pivot_longer should do most of what you need. Pivot your st, qt, and zt (and whatever other response variables you need). Here I've labeled them "response_variables" and their values as value. You can then facet_wrap by response_variable. Stat_summary will add a line and the mean and ci (se), after group and color by treat. I opted for scales = "free" in facet_wrap otherwise you won't see much going on as zt dominates with its larger range
library(dplyr)
library(ggplot2)
library(Hmisc)
library(tidyr)
dat %>%
pivot_longer(-(1:4), names_to = "response_variables") %>%
ggplot(.,aes(x=value, y=time, group = treat, color = treat)) +
facet_wrap(~response_variables, scales = "free") +
coord_flip() +
stat_summary(fun.data = mean_cl_normal,
geom = "errorbar") +
stat_summary(fun = mean,
geom = "line") +
stat_summary(fun = mean,
geom = "point")
| |
doc_4565
|
Here are my codes:
1.shader.vert
#version 400
in vec3 VertexPosition;
in vec3 VertexColor;
out vec3 Color;
void main()
{
Color = VertexColor;
gl_Position = vec4(VertexPosition, 1.0);
}
2.shader.frag
#version 400
in vec3 Color;
out vec4 FragColor;
void main(){
FragColor = vec4(Color, 1.0);
}
3.main.cpp
int main(int argc, char *argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
glutInitWindowSize(500, 500);
glutCreateWindow("Project1");
glutDisplayFunc(render);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, 100, 0, 100);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_POINT_SMOOTH);
// init glew
GLenum err = glewInit();
if (GLEW_OK != err){
printf("Error: %s\n", glewGetErrorString(err));
}
else{
printf("OK: glew init.\n");
}
// check gl version
const GLubyte *renderer = glGetString(GL_RENDERER);
const GLubyte *vendor = glGetString(GL_VENDOR);
const GLubyte *version = glGetString(GL_VERSION);
const GLubyte *glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION);
GLint major, minor;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
printf("GL Vendor : %s\n", vendor);
printf("GL Renderer : %s\n", renderer);
printf("GL Version (string) : %s\n", version);
printf("GL Version (integer): %d.%d\n", major, minor);
printf("GLSL Version: %s\n", glslVersion);
// vertex shader
GLuint vertShader = createAndCompileShader("shader.vert", VERTEX);
// fragment shader
GLuint fragShader = createAndCompileShader("shader.frag", FRAGMENT);
// program
GLuint programHandle = glCreateProgram();
if (programHandle == 0)
{
printf("Error creating program object.\n");
}
glAttachShader(programHandle, vertShader);
glAttachShader(programHandle, fragShader);
glLinkProgram(programHandle);
GLint status;
glGetProgramiv(programHandle, GL_LINK_STATUS, &status);
if (GL_FALSE == status){
printf("Failed to link shader program");
}
else{
printf("OK\n");
glUseProgram(programHandle);
}
glutMainLoop();
return EXIT_SUCCESS;
}
I create and compile the shader in createAndCompileShader and the status of the compilation is success.
And I draw a triangle in render function.
void render()
{
glLoadIdentity();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glBegin(GL_TRIANGLES);
glColor3f(1.0, 0.0, 0.0);
glVertex2f(20, 20);
glColor3f(0.0, 1.0, 0.0);
glVertex2f(80, 20);
glColor3f(0.0, 0.0, 1.0);
glVertex2f(50, 80);
glEnd();
glPopMatrix();
glutSwapBuffers();
}
The link status is also success. However, nothing is drawn in the window. I'm sure the render function is right.
Is there something wrong?
A: Is your triangle visible if you do not bind any GLSL shaders?
*
*try glDisable(GL_CULL_FACE);
*if it helps reorder the glVertex2f calls (different polygon winding)
What graphics card and driver do you have?
*
*for OpenGL+GLSL+Windows
*Intel is almost unusable (especially for advanced things)
*ATI/AMD are usually almost OK these days (it was much worse with old ATI drivers)
*nVidia are usually without any problems
You do not apply any projection or modelview matrix in the vertex shader
*
*your modelview is identity so it does not matter
*but projection is not
*that means you are passing non transformed coordinates to the fragment shader
*OpenGL coordinates are usually in range <-1,+1>
*and your untransformed triangle does not cover that range
*so change the triangle coordinates to that range for example
*(-0.5,-0.5),(+0.5,-0.5),(0.0,+0.5)
*and try rendering (you can also temporarily comment out the gluOrtho2D(0, 100, 0, 100); just to be sure)
*if it helps then that is the reason
*change your vertex shader to include: gl_Position = ftransform();
*In case of the core profile that is not an option anymore
*so you should pass your transform matrices via uniform variables
*and multiply usually inside vertex shader
*see simple GLSL engine example
*it have texture,normal maping,3 lights,...
*and see Understanding homogenous 4x4 transform matrices
[edit1] you are using glVertex instead of VAO, so you should use the compatibility profile
// vertex shader
#version 400 compatibility
out vec3 Color;
void main()
{
Color = gl_Color.rgb;
gl_Position = ftransform();
}
*
*that should do the trick ...
| |
doc_4566
|
It was jar packaging and working well.
I just changed the packaging to war and deployed to JBoss EAP 7. Now I am getting a "Source 'D:\JBoss\EAP-7.0.0\bin\content\eArsiv.war\WEB-INF\lib\lept4j-1.2.3.jar\win32-x86-64' does not exist" error and TessAPI can't be initialized.
Normally there is no folder named content under the bin folder, but I created the subfolders under bin anyway and still had no luck.
How can I handle this case?
Many thanks in advance!
Cheers,
Murat
| |
doc_4567
|
Thanks
A: Go through with this url :-
https://github.com/typicode/json-server.
A: I stumbled upon SyncAdapter and json-server, moved my data to Firebase, and treated Android like a web app.
I noticed that the guide used URI and JSON parsing, so I imported my fake REST db.json into Firebase and used the link on https://console.firebase.google.com/project/new-firebase-app/database/data
...
final String rest = "https://new-firebase-app.firebaseio.com/people.json";
// Parse the pretend json news feed
String jsonFeed = download(rest);
JSONArray jsonPeople = new JSONArray(jsonFeed);
...
This might help.
| |
doc_4568
|
I am looking to plot treatment effect by subgroup. I believe this was asked before, but without response (How to create forest plots of subgroups by treatment (ggforest))
So for example, using the 'colon' dataset, in the below code sex and rx are treated as separate predictors for status.
require("survival")
library(survminer)
model <- coxph( Surv(time, status) ~ sex + rx + adhere,
data = colon )
ggforest(model)
However, I would like to see if there are different effects of rx on status stratified by sex.
Meaning, I would like to generate a forest plot where for female and for male sex, there will be three plots each - one for each treatment arm.
Simply adding an interaction term does not seem to make any difference.
I tried instead to create an indicator variable instead, but this didn't seem to help either.
library(tidyverse)
colon <- colon %>%
mutate(indicator = factor(case_when(sex==0 & rx=="Obs" ~ "Female-Obs",
sex==0 & rx=="Lev" ~ "Female-Lev",
sex==0 & rx=="Lev+5FU" ~ "Female-FU",
sex==1 & rx=="Obs" ~ "Male-Obs",
sex==1 & rx=="Lev" ~ "Male-Lev",
sex==1 & rx=="Lev+5FU" ~ "Male-FU"), levels=c("Female-Obs", "Female-Lev", "Female-FU", "Male-Obs", "Male-Lev", "Male-FU")))
model <- coxph(Surv(time, status) ~ indicator, data=colon)
ggforest(model)
I really appreciate your help!
| |
doc_4569
|
Now I need to charge for the usage of the addon with a monthly fee (providing a 7-day trial), but I cannot figure out the flow I must follow.
I guess that I have to call the Recurring Application Charge API after the user authenticates (at the callback from the authorization URL), and if the user declines the charge then I have to log him out. Is that correct?
Do I have to take care of providing the 7-day trial myself, or will the Shopify RecurringApplicationCharge API manage it by itself?
What do I have to do when the user cancels the subscription or freezes it?
Can anyone point me to a document that analyzes the charging flow, other than the Shopify Billing API docs, which are not that helpful after all?
A: First question, you're right. You either have to log out the user or you just let him stay authenticated without permission to see some views or all of them.
There's a property on the Recurring Application Charge which allows you to define the number of trial days: "Number of days that the customer is eligible for a free trial.", so Shopify handles this for you.
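For reference, the request to create such a charge looks roughly like this (a sketch; the exact endpoint path depends on your API version, and the name/price/URL values are placeholders):
POST /admin/recurring_application_charges.json
{
  "recurring_application_charge": {
    "name": "My Addon Plan",
    "price": 9.99,
    "return_url": "https://example.com/charges/confirm",
    "trial_days": 7,
    "test": true
  }
}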
When user cancels a subscription or his store is put on "frozen" status, you should make sure he doesn't have access to your App functionalities, though if it's frozen, he won't even be able to log into his store.
| |
doc_4570
|
I was thinking of solving this issue using a for_each in conjunction with a dynamic block, as in the related SE question:
Main.tf
resource "azurerm_storage_share" "storage_share" {
for_each = var.storage_share_map
name = each.key
storage_account_name = azurerm_storage_account.sa.name
quota = each.value.quota
dynamic "acl" {
for_each = each.value.acl
content {
id = acl.value.id
access_policy {
permissions = acl.value.access_policy.permissions
start = acl.value.access_policy.start
expiry = acl.value.access_policy.expiry
}
}
}
The variable would be defined as:
variable "storage_share_map" {
type = map(object({
quota = number,
acl = object({
id = string,
access_policy = object({
expiry = string,
permissions = string,
start = string
})
}),
}))
default = {}
}
and later parametrized in my tests as:
storage_share_map = {
my-share-2 = {
quota = 123,
acl = {
id = "a-id",
access_policy = {
expiry = "ISO8061 UTC TIME"
permissions = "rwdl"
start = "ISO8601 UTC TIME"
},
},
}
However, when testing, terraform returns the following output:
Error: Unsupported attribute
on .terraform\modules\sa\main.tf line 83, in resource "azurerm_storage_share" "storage_share":
83: id = acl.value.id
|----------------
| acl.value is object with 3 attributes
This object does not have an attribute named "id".
Error: Unsupported attribute
on .terraform\modules\sa\main.tf line 83, in resource "azurerm_storage_share" "storage_share":
83: id = acl.value.id
|----------------
| acl.value is "a-id"
This value does not have any attributes.
Error: Unsupported attribute
on .terraform\modules\sa\main.tf line 86, in resource "azurerm_storage_share" "storage_share":
86: permissions = acl.value.access_policy.permissions
|----------------
| acl.value is object with 3 attributes
This object does not have an attribute named "access_policy".
Error: Unsupported attribute
on .terraform\modules\sa\main.tf line 86, in resource "azurerm_storage_share" "storage_share":
86: permissions = acl.value.access_policy.permissions
|----------------
| acl.value is "a-id"
This value does not have any attributes.
As I understand it, the issue here is that the for_each inside the dynamic block is either malformed or misbehaving: acl.value appears to be both valued as the string "a-id" and carrying three attributes (?).
Terraform version 0.12.26
Azurerm version 2.26.0
Any insight would be appreciated.
Related question:
Dynamic block with for_each inside a resource created with a for_each
A: By iterating in the dynamic block with for_each = each.value.acl, you are iterating over the values in the object type. It appears you really want to iterate over the acl themselves. You would need to adjust your type to:
variable "storage_share_map" {
type = map(object({
quota = number,
acl = list(object({
...
}))
})),
}
You can tell from the error messages that currently it is iterating over id and then access_policy, and failing to find the two requested attributes for each, which is why you have 2*2=4 errors.
You can adjust your input correspondingly to:
storage_share_map = {
my-share-2 = {
quota = 123,
acl = [{
id = "a-id",
access_policy = {
expiry = "ISO8061 UTC TIME"
permissions = "rwdl"
start = "ISO8601 UTC TIME"
},
}],
}
and this will achieve the behavior you desire.
Note that Terraform 0.12 has issues sometimes with nested object type specifications, so omitting the acl with [] may result in crashing under certain circumstances.
A: Please use square brackets for each.value.acl.
Azure storage share block should look like:
resource "azurerm_storage_share" "storage_share" {
for_each = var.storage_share_map
name = each.key
storage_account_name = azurerm_storage_account.sa.name
quota = each.value.quota
dynamic "acl" {
for_each = [each.value.acl]
content {
id = acl.value.id
access_policy {
permissions = acl.value.access_policy.permissions
start = acl.value.access_policy.start
expiry = acl.value.access_policy.expiry
}
}
}
}
| |
doc_4571
|
A: If you are using Windows systems, you can restrict access with firewall settings using the inbound rules.
This may help you with your rules configuration:
http://www.it.cornell.edu/services/firewall/howto/windows/tsp/inbound.cfm
| |
doc_4572
|
For example, say I want to start playing music from Google Play Music programmatically (ie: without leaving my app and launching Google Play Music). This is what I've tried:
ComponentName myEventReceiver = new ComponentName("com.google.android.music", "com.google.android.music.MediaButtonIntentReceiver");
AudioManager myAudioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
myAudioManager.registerMediaButtonEventReceiver(myEventReceiver);
// build the PendingIntent for the remote control client
Intent mediaButtonIntent = new Intent(Intent.ACTION_MEDIA_BUTTON);
mediaButtonIntent.setComponent(myEventReceiver);
PendingIntent mediaPendingIntent = PendingIntent.getBroadcast(getApplicationContext(), 0, mediaButtonIntent, 0);
// create and register the remote control client
RemoteControlClient myRemoteControlClient = new RemoteControlClient(mediaPendingIntent);
myAudioManager.registerRemoteControlClient(myRemoteControlClient);
I get this error, which makes sense:
registerMediaButtonEventReceiver() error: receiver and context package names don't match.
Is it possible to do some other way?
A: My app, Playlist Manager, is able to launch Google Music. As long as Google Music has picked up the playlist itself through a scan, it plays the playlist. I have found, though, that it treats the play order not as numeric but as text, so 1, 10, 11, 2.
public void playSelectedPlaylist(String playlistname) {
Intent intent = new Intent( MediaStore.INTENT_ACTION_MEDIA_PLAY_FROM_SEARCH);
intent.putExtra(MediaStore.Audio.Playlists.ENTRY_CONTENT_TYPE, "android.intent.extra.playlist" );
intent.putExtra(SearchManager.QUERY, playlistname);
intent.putExtra(MediaStore.EXTRA_MEDIA_FOCUS, "vnd.android.cursor.item/playlist");
if (intent.resolveActivity(getPackageManager()) != null) {
startActivity(intent);
}else{
doToast("Sorry, no app was found to service this request", context);
}
}
The intent is INTENT_ACTION_MEDIA_PLAY_FROM_SEARCH and it finds all those apps that have this in their manifest.xml file. So far so good as Poweramp shows up.
Have a look at Playlist Manager by theoklink.
Google Play Music does play the playlist without any problems.
| |
doc_4573
|
char *get_file_type(char *path, char *filename)
{
FILE *fp;
char command[100];
char file_details[100];
char *filetype;
sprintf(command, "file -i %s%s", path, filename);
fp = popen(command, "r");
if (fp == NULL) {
printf("Failed to run command\n" );
exit(1);
}
while (fgets(file_details, sizeof(file_details)-1, fp) != NULL) {
filetype = (strtok(strstr(file_details, " "), ";"));
}
pclose(fp);
return filetype;
}
Here, instead of declaring command[], can I use *command? I tried to use it, but it threw an exception. Don't we need to free up variables declared like command[]? If yes, how?
A: You can use char *command;, but then you must allocate some memory for command to refer to with a call to malloc(), and when you are done with that memory, it has to be freed again with a call to free().
As you can see, that is a lot more work than using a fixed-size array (as you do now), but it can be made a lot safer as well, because you could create a buffer of exactly the right size, instead of hoping that the total length of the command won't exceed 100 characters.
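For example, a sketch of that exact-size approach, reusing the path and filename parameters from the question:
/* snprintf(NULL, 0, ...) returns the number of characters the formatted
   string needs, so the buffer can be sized exactly. */
int needed = snprintf(NULL, 0, "file -i %s%s", path, filename);
char *command = malloc(needed + 1);        /* +1 for the terminating '\0' */
if (command == NULL) {
    perror("malloc");
    exit(1);
}
snprintf(command, needed + 1, "file -i %s%s", path, filename);
/* ... pass command to popen() as before ... */
free(command);                             /* release it when done */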
Aside from that, your code has a problem: The filetype pointer that the function returns points to a location within the array file_details, but that array will be cleaned up by the compiler when executing the return statement, so the pointer that gets returned by the function refers to some memory that is marked as "free to be used for other purposes".
If it is not a problem that the result of get_file_type is only valid for one file at a time, you can declare the file_details array as static, so that it will be preserved across calls to the function.
A: When you declare an array:
char command[100];
the compiler allocates the memory for it (100 chars in this case) and command points to the start of that memory. You can access the memory you've allocated:
command[0] = 'a'; // OK
command[99] = 'A'; // OK
command[100] = 'Z'; // Error: out of bounds
but you cannot change the value of command:
command = NULL; // Compile-time error
The memory will be automatically freed when command goes out of scope.
When you declare a pointer:
char *commandptr;
you only create a single variable for pointing to chars, but it doesn't point to anything yet. Trying to use it without initialising it is an error:
commandptr[0] = 'A'; // Undefined behaviour; probably a segfault
You need to allocate the memory yourself using malloc:
commandptr = malloc(100);
if (commandptr) {
// Always check that the return value of malloc() is not NULL
commandptr[0] = 'A'; // Now you can use the allocated memory
}
and free it when you've finished with it:
free(commandptr);
A: Why would you change it? For temporary buffers, people usually declare the arrays with [] so they don't have to worry about garbage disposal.
| |
doc_4574
|
Error: Objects are not valid as a React child (found: object with keys
{sds, id}). If you meant to render a collection of children, use an
array instead.
import React, {Component} from 'react';
import axios from "axios";
class Post extends Component {
constructor() {
super();
this.state={
postData : "",
postResult : " "
}
}
onChangeHandler=(event)=>{
var mydata= event.target.value;
this.setState({postData:mydata})
}
onClickHandler=()=>{
axios.post('http://jsonplaceholder.typicode.com/posts',this.state.postData)
.then(response=>{
this.setState({postResult:response.data})
})
.catch(error=>{
alert("something wrong")
})
}
render() {
return (
<div>
<p>{this.state.postResult}</p>
<input onChange={this.onChangeHandler} type="text"/>
<button onClick={this.onClickHandler}>Post</button>
</div>
);
}
}
export default Post;
A: Based on jsonplaceholder, your response.data will be an object following the structure:
{title: "myTitle", body: "myBody", id: 101}
This way this.state.postResult will be an object, and you can't pass an object to render, which results in the error you experienced. Instead you can extract title and body from postResult, for example:
render() {
const { title, body } = this.state.postResult
return (
<div>
<h1>Title: {title}</h1>
<p>Body: {body}</p>
<input onChange={this.onChangeHandler} type="text"/>
<button onClick={this.onClickHandler}>Post</button>
</div>
);
}
| |
doc_4575
|
jQuery('#changeImage').hover(function()
{
jQuery('body').css("background", "url(img/globo.jpg) center top #1C1C1C fixed"),
jQuery('body').css("background-size", "cover");
}).mouseleave(function(){
jQuery('body').css("background", "#1C1C1C");
});
CSS:
#image {
background:url("img/image.jpg") fixed top center;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='.myBackground.jpg', sizingMethod='scale');
-ms-filter: "progid:DXImageTransform.Microsoft.AlphaImageLoader(src='myBackground.jpg', sizingMethod='scale')";
}
to change a background image on hover.
However, I would like to add some fading animation in order to make the experience smoother. How could I achieve that?
A: No need for JavaScript here; you can achieve what you're looking for through CSS alone:
#image{
background:url(img/image.jpg) fixed top center;
background-size:cover;
}
#image:before{
background:url(img/globo.jpg) center top #1C1C1C fixed;
background-size:cover;
content:"";
display:block;
left:0;
height:100%;
opacity:0;
top:0;
transition:opacity .5s linear;
width:100%;
}
#image:hover:before{opacity:1;}
Updated following sample link provided in comments:
JavaScript
jQuery('#changeImage').hover(function(){
jQuery('body').css("background-image", "url(img/globo.jpg)"),
jQuery('body').addClass("reveal");
}).mouseleave(function(){
jQuery('body').removeClass("reveal");
});
CSS
body{
background-attachment:fixed;
background-color:#1c1c1c;
background-position:center top;
background-size:cover;
}
body:before{
background:#1c1c1c;
bottom:0;
content:"";
display:block;
left:0;
opacity:1;
position:fixed;
right:0;
top:0;
transition:opacity .5s linear;
}
body.reveal:before{
opacity:0;
}
I don't use JQuery so can't fully test this, let me know if you have any problems with it.
| |
doc_4576
|
For example, if I have a blob stored in Archive, I access it by rehydrating it to Hot/Cool. Once I am done, is there a way Azure can automatically downtier it?
A: Moving blobs that have not been accessed to another tier is possible using native functionality, but for the moment this is limited to France Central, Canada East, and Canada Central, as the feature is in preview.
In order to use the Last accessed option, select Access tracking enabled on the Lifecycle Management page in the Azure portal.
And then define a rule based on the Last accessed
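For illustration, a lifecycle policy along those lines might look roughly like this (a sketch; the rule name and day counts are placeholders, so check the docs below for the exact schema):
{
  "rules": [
    {
      "name": "tier-down-untouched-blobs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterLastAccessTimeGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}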
More details you may find here
A: This is now generally available as of 2019 from Microsoft. Now you can -
*
*Automatically change the blob tier after N days.
*Automatically remove the blob after N days.
Azure Blob lifecycle management overview
A: All tier changes must be performed by you; there is no automatic tier-change method built-in. You'll need to make a specific call to set the tier for each tier change (note - I pointed to the REST API, but various language-specific SDKs wrap the call as well).
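For example, one such call can be made through the Azure CLI (a sketch; the account, container, and blob names are placeholders):
az storage blob set-tier --account-name mystorageaccount --container-name mycontainer --name myblob.dat --tier Cool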
A: Please see this Azure Feedback question for updates on automated object lifecycle policies for Azure Storage Blobs (as well as a description of a workaround using Logic Apps). The question pertains to blob TTL, but tiering policies will also be possible with both the workaround and ultimately using the policy framework.
| |
doc_4577
|
So, people, here is my DataGridView:
Sub listar_CC_configurados()
dtc = Negcc.Listar_CC_configurados(VGlobales.Base)
For Each data As DataRow In dtc.Rows
Dim aa As Integer = Me.dgvlistccconfigurados.Rows.Add()
'Me.dgv1.Rows(aa).Cells(0).Value = data("ACCION").ToString().Trim
Me.dgvlistccconfigurados.Rows(aa).Cells(0).Value = data("IDSUCURSAL").ToString().Trim
Me.dgvlistccconfigurados.Rows(aa).Cells(1).Value = data("SUCURSAL").ToString()
Me.dgvlistccconfigurados.Rows(aa).Cells(2).Value = data("IDALMACEN").ToString()
Me.dgvlistccconfigurados.Rows(aa).Cells(3).Value = data("ALMACEN").ToString()
Me.dgvlistccconfigurados.Rows(aa).Cells(4).Value = data("IDCC").ToString()
Me.dgvlistccconfigurados.Rows(aa).Cells(5).Value = data("CC").ToString()
Me.dgvlistccconfigurados.Rows(aa).Cells(6).Value = data("PERIODO_INICIO").ToString()
Me.dgvlistccconfigurados.Rows(aa).Cells(7).Value = data("PERIODO_FIN").ToString()
Next
End Sub
In the CellEndEdit event, I perform the validation only for cell No. 4, as follows:
Private Sub dgvlistccconfigurados_CellEndEdit(ByVal sender As System.Object, ByVal e As System.Windows.Forms.DataGridViewCellEventArgs) Handles dgvlistccconfigurados.CellEndEdit
Dim clave As String
Dim nlinea As Integer
If e.ColumnIndex = 4 Then
clave = dgvlistccconfigurados.Rows(dgvlistccconfigurados.CurrentRow.Index).Cells(4).Value.ToString
nlinea = dgvlistccconfigurados.CurrentRow.Index
For i As Integer = 0 To dgvlistccconfigurados.Rows.Count - 1
If clave = dgvlistccconfigurados.Rows(i).Cells(4).Value.ToString And i <> nlinea Then
dgvlistccconfigurados.Rows(nlinea).Cells(4).Value = ""
MsgBox("esta repetido el codigo")
SendKeys.Send("{UP}")
Exit Sub
End If
Next
End If
End Sub
End Class
Now I can't find a way to validate whether the combination of cells entered already exists in the same DataGridView; I can only do it with cell 4.
Any idea what I am missing to implement this requirement?
I am applying the following code but it gives me an error
Private Sub dgvlistccconfigurados_CellEndEdit(ByVal sender As System.Object, ByVal e As System.Windows.Forms.DataGridViewCellEventArgs) Handles dgvlistccconfigurados.CellEndEdit
Dim clave As String
Dim suc, almacen As String
Dim nlinea As Integer
If e.ColumnIndex = 4 Then
clave = dgvlistccconfigurados.Rows(dgvlistccconfigurados.CurrentRow.Index).Cells(4).Value.ToString
suc = dgvlistccconfigurados.Rows(dgvlistccconfigurados.CurrentRow.Index).Cells(0).Value.ToString
almacen = dgvlistccconfigurados.Rows(dgvlistccconfigurados.CurrentRow.Index).Cells(2).Value.ToString
nlinea = dgvlistccconfigurados.CurrentRow.Index
For i As Integer = 0 To dgvlistccconfigurados.Rows.Count - 1
If Convert.ToString(clave And suc And almacen) = Val(dgvlistccconfigurados.Rows(i).Cells(4).Value And dgvlistccconfigurados.Rows(i).Cells(0).Value And dgvlistccconfigurados.Rows(i).Cells(2).Value) And i <> nlinea Then
dgvlistccconfigurados.Rows(nlinea).Cells(4).Value = ""
MsgBox("esta repetido el codigo")
SendKeys.Send("{UP}")
Exit Sub
End If
Next
End If
End Sub
Conversion of string to type 'Long' is invalid.
| |
doc_4578
|
*
*Let N_PROD, N_CON represent the number of producer/consumer threads respectively that are created, and let BUF_SIZE represent size of buffer.
*Each producer thread has to select 2 random prime numbers, multiply them, and add the result to a buffer
*Let TOTAL_PRIMES represent how many times in total elements are added or removed from the buffer. Obviously some threads may access the buffer more than others (e.g., if TOTAL_PRIMES=5 and N_PROD=3, then it's perfectly fine if producer1 adds three different products to the buffer, producer2 adds two, and producer3 adds none).
*The spawned threads should only begin working after the main thread has created all producers/consumers.
Each time I execute the program though, it never seems to surpass 3 or 4 prime products. Further, it seems to keep writing the same data.
My main question is why doesn't the program ever reach the end? I can't seem to figure out where it's getting stuck. I don't know if the repeated data is a consequence of the first problem, or if they're two separate issues.
Here is what I think is the most relevant part of my code (so not including any functions that simply do some calculations or output something to console):
// Declaration of thread condition variable
pthread_cond_t start_cond = PTHREAD_COND_INITIALIZER;
// declaring mutex
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
lluint buf[BUF_SIZE];
int p_count = 0;
int c_count = 0;
int all_started = 0;
int pdone = 0;
int cdone = 0;
//////////////////////////////////
sem_t* sem_access;
sem_t* sem_occupy; //how many elements currently in buffer
sem_t* sem_free; //how much free space we have
sem_t* sem_all_started;
sem_t* sem_buf;
///////////////////////////////////////////
// int main()
int main(int argc, char* argv[])
{
close_Semaphors();
pthread_t p_threads[N_PROD];
pthread_t c_threads[N_CONS];
open_all_sem(); //helper function to opens all semaphores listed above
int rc, t;
for (t=0; t<N_PROD; t++)
{
rc = pthread_create(&p_threads[t], NULL, producer, (void*)t);
if (rc)
{
printf("ERROR; return code from pthread_create() is %d\n", rc);
exit(-1);
}
}
sem_wait(sem_access);
printf("main thread created all producer threads\n");
sem_post(sem_access);
for (t=0; t<N_CONS; t++)
{
rc = pthread_create(&c_threads[t], NULL, consumer, (void*)t);
if (rc)
{
printf("ERROR; return code from pthread_create() is %d\n", rc);
exit(-1);
}
}
printf("main thread created all consumer threads\n");
broadcast_all_started();
for (t=0; t<N_PROD; t++)
{
rc = pthread_join(p_threads[t], NULL);
if (rc != 0)
{
printf("Error joining producer thread %d\n", t+1);
exit(-1);
}
}
for (t=0; t<N_CONS; t++)
{
rc = pthread_join(c_threads[t], NULL);
if (rc != 0)
{
printf("Error joining consumer thread %d\n", t+1);
exit(-1);
}
}
printf("Back in main thread\n");
sem_unlink("/sem_free");
sem_unlink("/sem_access");
sem_unlink("/sem_occupy");
sem_unlink("/sem_all_started");
sem_unlink("/sem_buf");
printf("Goodbye!\n");
close_Semaphors();
return 0;
}
void producer(void* id)
{
wait_in_threads_until_all_start();
int p_id = (int)id + 1;
while (!pdone)
{
lluint prime1, prime2, primeProd;
prime1 = getPrimeNum();
prime2 = getPrimeNum();
primeProd = prime1 * prime2;
sem_wait(sem_free);
sem_wait(sem_access);
write_add_to_buf_msg(p_id, prime1, prime2, primeProd);
write_producer_is_done(p_id); //simply outputs something
sem_post(sem_access);
sem_post(sem_occupy);
p_count++;
if (p_count == TOTAL_MSG)
{
printf("all producers terminated\n");
pdone = 1;
}
}
if (pdone == 1)
pthread_exit(NULL);
}
void add_to_buf (lluint prod)
{
sem_wait(sem_buf);
int val;
sem_getvalue(sem_occupy, &val);
buf[val] = prod;
sem_post(sem_buf);
}
void consumer(void* id)
{
wait_in_threads_until_all_start();
int c_id = (int)id+1;
while (!cdone)
{
sem_wait(sem_occupy);
lluint prod, factor1, factor2;
sem_wait(sem_access);
write_remove_from_buf_msg(c_id, &prod); //within this we call remove_from_buf
find_two_factors(prod, &factor1, &factor2);
printf(" = %lli x %lli\n", factor1, factor2);
write_consumer_is_done(c_id);
sem_post(sem_access);
sem_post(sem_free);
c_count++;
if (c_count == TOTAL_MSG)
{
printf("all consumers terminated\n");
cdone = 1;
}
}
if (cdone == 1)
pthread_exit(NULL);
}
void remove_from_buf(lluint* prod)
{
sem_wait(sem_buf);
int val;
sem_getvalue(sem_occupy, &val);
*prod = buf[val];
sem_post(sem_buf);
}
void wait_in_threads_until_all_start()
{
sem_post(sem_all_started);
pthread_mutex_lock(&lock);
if (all_started == 0)
{
pthread_cond_wait(&start_cond, &lock);
}
pthread_mutex_unlock(&lock);
}
void all_threads_ready()
{
pthread_mutex_lock(&lock);
all_started = 1;
pthread_cond_broadcast(&start_cond);
pthread_mutex_unlock(&lock);
}
void close_Semaphors(void)
{
sem_close(sem_access);
sem_close(sem_occupy);
sem_close(sem_free);
sem_close(sem_all_started);
sem_close(sem_buf);
}
Here is the output from 4 separate runs of the program (each time I had to press Ctrl+C to stop it):
Output1.txt
main thread created all producer threads
main thread created all consumer threads
producer #1 going to add product: 15347 = 103 x 149
producer #1 is done
producer #2 going to add product: 15347 = 103 x 149
producer #2 is done
consumer #3 just removed: 15347 = 103 x 149
consumer #3 is done
consumer #1 just removed: 15347 = 103 x 149
consumer #1 is done
producer #1 going to add product: 19367 = 107 x 181
producer #1
Output2.txt
main thread created all producer threads
main thread created all consumer threads
producer #1 going to add product: 15347 = 103 x 149
producer #1
Output3.txt
main thread created all producer threads
main thread created all consumer threads
producer #1 going to add product: 15347 = 103 x 149
producer #1 is done
consumer #3 just removed: 15347 = 103 x 149
consumer #3 is done
producer #1 going to add product: 19367 = 107 x 181
producer #1 is done
consumer #2 just removed: 19367 = 107 x 181
consumer #2 is done
producer #1 going to add product: 27221 = 163 x 167
producer #1 is done
consumer #1 just removed: 27221 = 163 x 167
consumer #1
Output4.txt
main thread created all producer threads
main thread created all consumer threads
producer #1 going to add product: 15347 = 103 x 149
producer #1 is done
consumer #3 just removed: 15347 = 103 x 149
consumer #3 is done
producer #1 going to add product: 19367 = 107 x 181
producer #1 is done
consumer #1 just removed: 19367 = 107 x 181
consumer #1 is done
producer #1 going to add product: 27221 = 163 x 167
producer #1 is done
consumer #2 just removed: 27221 = 163 x 167
consumer #2 is done
producer #1 going to add product: 20651 = 107 x 193
producer #1 is done
consumer #3 just removed: 20651 = 107 x 193
consumer #3 is done
producer #1 going to add product: 30967 = 173 x 179
producer #1 is done
consumer #1 just removed: 30967 = 173 x 179
consumer #1
| |
doc_4579
|
project/
src/
CMakeLists.txt
foo.h
main.cpp
util/
CMakeLists.txt
bar.h
Inside of bar.h is it possible to have an include statement as #include "foo.h" ? I've seen some possible ways to do this via "precompiled headers" configurations in Visual Studio, but not using CMake directly.
A: The good news is that the compilation process in CMake works with target dependencies, not with file dependencies. So you don't need to change your file hierarchy; you just need to modify the target hierarchy, but not too much.
*
*In file src/CMakeLists.txt, create extra INTERFACE target named for example src_interface
add_library( src_interface INTERFACE )
*Add your include directory to it:
target_include_directories( src_interface INTERFACE ${CMAKE_CURRENT_SOURCE_DIR} )
*This needs to be done before you add util as subdirectory
add_subdirectory( util )
*In src/util/CMakeLists.txt, add linking to the interface:
target_link_libraries( util_target PRIVATE src_interface )
More about INTERFACE libraries in CMake: https://cmake.org/cmake/help/latest/command/add_library.html#interface-libraries
| |
doc_4580
|
Doesn't seem to be any way in XML Diff to specify, "ignore these tags".
I could roll through the file, find each instance, find the end of it, delete it out, but I'm hoping there will be something simpler. If not, oh well.
Edit: Here's a piece of the XML:
<numericValue color="-103" hidden="no" image="stuff.jpg" key="More stuff." needsQuestionFormatting="false" system="yes" systemEquivKey="Stuff." systemImage="yes">
<numDef increment="1" maximum="180" minimum="30">
<unit deprecated="no" key="BPM" system="yes" />
</numDef>
</numericValue>
A: If you are using Linq to XML, you can load your XML into an XDocument via:
var doc = XDocument.Parse(xml); // Load the XML from a string
Or
var doc = XDocument.Load(fileName); // Load the XML from a file.
Then search for all elements with matching names and use System.Xml.Linq.Extensions.Remove() to remove them all at once:
string prefix = "L"; // Or whatever.
// Use doc.Root.Descendants() instead of doc.Descendants() to avoid accidentally removing the root element.
var elements = doc.Root.Descendants().Where(e => e.Name.LocalName.StartsWith(prefix, StringComparison.Ordinal));
elements.Remove();
Update
In your XML, the color="-103" substring is an attribute of an element, rather than an element itself. To remove all such attributes, use the following method:
public static void RemovedNamedAttributes(XElement root, string attributeLocalNamePrefix)
{
if (root == null)
throw new ArgumentNullException();
foreach (var node in root.DescendantsAndSelf())
node.Attributes().Where(a => a.Name.LocalName == attributeLocalNamePrefix).Remove();
}
Then call it like:
var doc = XDocument.Parse(xml); // Load the XML
RemovedNamedAttributes(doc.Root, "color");
| |
doc_4581
|
What should I do then? Is there any way to use v2.3 of the Graph API? As per the SDK, my current Graph API version is 2.9.
A: Use user_managed_groups instead, there is no other way. You can only access groups you manage and you cannot go back to an older API version. Even with an older App it would be pointless, see deprecation info in the changelog: https://developers.facebook.com/docs/apps/changelog
| |
doc_4582
|
namespace Models
{
[StructLayout(LayoutKind.Explicit, Size = 120, CharSet = CharSet.Unicode)]
public struct DynamicState
{
[FieldOffset(0)]
public double[] Position;
[FieldOffset(24)]
public double[] Velocity;
[FieldOffset(48)]
public double[] Acceleration;
[FieldOffset(72)]
public double[] Attitude;
[FieldOffset(96)]
public double[] AngularVelocity;
}
}
and C++/CLI method:
Models::DynamicState SomeClassClr::DoSomething(Models::DynamicState ds)
{
int struct_size = Marshal::SizeOf(ds);
System::IntPtr ptr = Marshal::AllocHGlobal(struct_size);
DynamicStateStruct ds_struct;
struct_size = sizeof(ds_struct);
Marshal::StructureToPtr(ds, ptr, false);
ds_struct = *(DynamicStateStruct*)ptr.ToPointer();
Models::DynamicState returnVal;
mpSomeClass->doSomething(ds_struct);
return returnVal;
}
where DynamicStateStruct is a native C++ class:
struct DynamicStateStruct
{
double mPosition[3];
double mVelocity[3];
double mAcceleration[3];
double mAttitude[3];
double mAngularVelocity[3];
};
When I recover the struct (ds_struct) in native C++ I am not getting the correct values; any idea what I am missing?
A: Try the following variant:
public struct DynamicState
{
[MarshalAs (UnmanagedType.ByValArray, SizeConst=3)]
public double[] Position;
[MarshalAs (UnmanagedType.ByValArray, SizeConst=3)]
public double[] Velocity;
[MarshalAs (UnmanagedType.ByValArray, SizeConst=3)]
public double[] Acceleration;
[MarshalAs (UnmanagedType.ByValArray, SizeConst=3)]
public double[] Attitude;
[MarshalAs (UnmanagedType.ByValArray, SizeConst=3)]
public double[] AngularVelocity;
}
Another option is to use fixed array available in unsafe code:
public unsafe struct DynamicState
{
public fixed double Position[3];
public fixed double Velocity[3];
public fixed double Acceleration[3];
public fixed double Attitude[3];
public fixed double AngularVelocity[3];
}
P.S. A good guide on .Net interop can be found here: http://www.mono-project.com/Interop_with_Native_Libraries
| |
doc_4583
|
$.when(postrequest1, postrequest2).then(function () {
// how do I access the results of postrequest1 and postrequest 2 here?
});
A: $.when(postrequest1, postrequest2).then(function (data1,data2) {
// data1, data2 are arrays of the form [ "success", statusText, jqXHR ]
});
Simply give data arguments to the anonymous callback function. See $.when() for more details.
A: try this
$.when(postrequest1, postrequest2).then(function (a1,a2) {
var jqXHR1 = a1[2]; /* arguments are [ "success", statusText, jqXHR ] */
alert(jqXHR1.responseText);
var jqXHR2 = a2[2];
alert(jqXHR2.responseText);
});
a1 and a2 are the arguments for the 1st and 2nd ajax requests respectively...
a1 and a2 are arrays, each having keys as (success, statusText, jqXHR)
you can then handle them individually.
Documentation :http://api.jquery.com/jQuery.when/
A: Have you tried this?
$.when(postrequest1, postrequest2).then(function (postData1, postData2) {
});
(As long as the post requests are single requests, otherwise the then params can be arrays)
| |
doc_4584
|
My code is as follows:
final File wallpaperDirectory = new File("/sdcard/Wallpapertask/");
wallpaperDirectory.mkdirs();
myImageView1.setImageBitmap(circleBitmap);
BitmapDrawable drawable = (BitmapDrawable)myImageView1.getDrawable();
Bitmap bitmap = drawable.getBitmap();
File sdCardDirectory = Environment.getExternalStorageDirectory();
File image = new File(sdCardDirectory, "test2.png");
A:
How can I save my images in the subfolder?
Your code does not save any images. You can use the compress() method on Bitmap to write a Bitmap to an OutputStream.
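For example, a sketch of writing the bitmap from the question into that folder as a PNG (error handling kept minimal; assumes the usual java.io and android.os imports):
File dir = new File(Environment.getExternalStorageDirectory(), "Wallpapertask");
dir.mkdirs();
File image = new File(dir, "test2.png");
FileOutputStream out = null;
try {
    out = new FileOutputStream(image);
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out); // 100 = best quality for PNG
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (out != null) {
        try { out.close(); } catch (IOException ignored) { }
    }
}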
Also note that you should not be hardcoding paths, like /sdcard/Wallpapertask, as that path will not work on all devices or for all users.
| |
doc_4585
|
I can easily imagine that an array is some kind of special object, but why is a function also of object type? There's nothing similar between an object and a function. How would you implement a special object to work like a function? I have no idea.
A: There is no way to implement an object to work like a function, but you can use a function as an object.
var a = function(){}
a.x = 5;
console.log(a.x);
The reason everything is an object like that is just a design choice that, for some, is considered elegant. I particularly don't agree, but people don't agree with things, in general.
Also, I should note that a point that is probably being made is that JS functions are first-class. This means you can use a function like you could use any other value, which includes sending them as arguments to other functions and returning functions from functions and so on. Examples below:
// return functions from function
function create_a_function_that_doubles_numbers(){
return function(a){ return a*2; }
}
var double_number = create_a_function_that_doubles_numbers();
console.log(double_number(5)); // 10
// sending a function to a function
function call_twice(f){
f(); f();
};
function shout(){
alert("HEY");
};
call_twice(shout);
// Here we send "shout" to "call_twice" which... calls it twice,
// so a "HEY" popup appears two times.
| |
doc_4586
|
<?php
$blocks = parse_blocks( $pid->post_content );
foreach ( $blocks as $block ) {
if ( $block['blockName'] === 'acf/your-block-name' ) {
//do something
}
}
is not working.
A: You need to create a recursive function. The code will look like:
<?php
$blocks = parse_blocks($pid->post_content);
foreach ($blocks as $block) {
$myAcfBlock = getMyAcfBlock($block);
if($myAcfBlock){
//do something
}
}
function getMyAcfBlock($blockObject)
{
if ($blockObject['blockName'] === 'acf/your-block-name') {
return $blockObject;
}
if (!empty($blockObject['innerBlocks'])) {
foreach ($blockObject['innerBlocks'] as $innerBlock) {
$innerBlockObject = getMyAcfBlock($innerBlock);
if ($innerBlockObject) {
return $innerBlockObject;
}
}
}
return false;
}
| |
doc_4587
|
(What I'm trying to do is have a function called on left click of the menu item, but have it show the menu on right click)
Here's my code
//Get reference to main system status bar
let statusItem = NSStatusBar.systemStatusBar().statusItemWithLength(-1)
statusItem.image = icon
statusItem.menu = menuBar
if let statusButton = statusItem.button
{
statusButton.target = self
statusButton.action = #selector(statusItemClicked)
statusButton.sendActionOn(Int(NSEventMask.RightMouseUpMask.rawValue | NSEventMask.LeftMouseUpMask.rawValue))
}
Original Answer with code Left vs Right Click Status Bar Item Mac Swift 2
A: Bitwise OR, just like it does in most C-like languages. In this context, it's being used to combine flags.
A: That must be really old code. Nowadays, in modern Swift, NSEventMask is an Option Set. You can just say [NSEventMask.rightMouseUp, NSEventMask.leftMouseUp], and you don't need the Int cast at all. (Or if you haven't updated to Swift 3 yet, the case names would start with capital letters.)
| |
doc_4588
|
I've seen answers stating that I can use localhost.charlesproxy.com, but that means changing a lot of config files and having to remember "oh yeah, I can't use localhost:3000 any more, I gotta use localhost.charlesproxy.com:3000 now". It's not the end of the world, but it's a little annoying.
I've looked into Internet Options -> Connections -> LAN Settings -> Proxy Server - Advanced and nothing seems out of the ordinary.
I disabled my firewall
Made sure that Charles is allowed to communicate through the Firewall (even though it's disabled)
I've uninstalled / reinstalled Charles a number of times and deleted the app settings found in C:\Users\[USER]\AppData\Roaming\Charles but Charles still has no love for localhost requests... On the other hand, Fiddler can handle the localhost requests, but I'm much more comfortable using Charles and its UI.
One last thing, it's probably not worth mentioning, but I've noticed that Charles doesn't list /sandman requests anymore. Fiddler does, but a quick and cursory google search doesn't provide any details as to what that request does or if it's important. But I thought I'd mention it because it did stand out to me...
So yeah, does anyone have any idea where else I could look to get Charles up and running again? If I can't use Charles daily, I'll have a tiny, flower-vase-shaped hole in my heart...
A: I've also been struggling with this, probably happened around the same time too. I will update this answer with any more I discover, but for now I've found a firefox fix:
Set network.proxy.allow_hijacking_localhost to true in about:config
I suspect all the major browsers released a modification at the same time.
References
*
*https://bugzilla.mozilla.org/show_bug.cgi?id=1535581
| |
doc_4589
|
error
I want to draw a circle with the NDK, but this kind of error occurs...
Please help me find the solution, friends...
this is my jni/ndkfoo.c
#include <string.h>
#include <jni.h>
#include <GLES/gl.h>
#include <GLES/glext.h>
jstring Java_com_ndkfoo_DemoActivity_invokeNativeFunction(JNIEnv* env, jobject javaThis) {
return (*env)->NewStringUTF(env, "Hello start for horse race!");
}
void rasterCircle(int x0, int y0, int radius)
{
int f = 1 - radius;
int ddF_x = 1;
int ddF_y = -2 * radius;
int x = 0;
int y = radius;
setPixel(x0, y0 + radius);
setPixel(x0, y0 - radius);
setPixel(x0 + radius, y0);
setPixel(x0 - radius, y0);
while(x < y)
{
// ddF_x == 2 * x + 1;
// ddF_y == -2 * y;
// f == x*x + y*y - radius*radius + 2*x - y + 1;
if(f >= 0)
{
y--;
ddF_y += 2;
f += ddF_y;
}
x++;
ddF_x += 2;
f += ddF_x;
setPixel(x0 + x, y0 + y);
setPixel(x0 - x, y0 + y);
setPixel(x0 + x, y0 - y);
setPixel(x0 - x, y0 - y);
setPixel(x0 + y, y0 + x);
setPixel(x0 - y, y0 + x);
setPixel(x0 + y, y0 - x);
setPixel(x0 - y, y0 - x);
}
}
thanks in advance.
A: You have not told it to link against the gl library which provides the setPixel function.
See the jni/Android.mk for one of the gl application examples.
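For example, a minimal jni/Android.mk sketch for linking against the GLES 1.x library (the module and source file names are assumptions based on your ndkfoo.c):
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE    := ndkfoo
LOCAL_SRC_FILES := ndkfoo.c
# pull in OpenGL ES 1.x (and the Android log library, which is often handy)
LOCAL_LDLIBS    := -lGLESv1_CM -llog
include $(BUILD_SHARED_LIBRARY)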
| |
doc_4590
|
Open Volume Adj Close Ticker
Date
2006-11-22 140.750000 45505300 114.480649 SPY
I want to change df into another dataframe of Open prices, like below:
SPY AGG
Date
2006-11-22 140.750000 NA
It only uses the Open data and two tickers, so how do I change one dataframe into the other?
A: I think you can use DataFrame constructor with reindex by list of ticker L:
L = ['SPY','AGG']
df1 = pd.DataFrame({'SPY': [df.Open.iloc[0]]},
index=[df.index[0]])
df1 = df1.reindex(columns=L)
print (df1)
SPY AGG
2006-11-22 140.75 NaN
You can use read_html for find list of Tickers:
df2 = pd.read_html('https://en.wikipedia.org/wiki/List_of_S%26P_500_companies', header=0)[0]
#print (df2)
#filter only Ticker symbols starts with SP
df2 = df2[df2['Ticker symbol'].str.startswith('SP')]
print (df2)
Ticker symbol Security SEC filings \
407 SPG Simon Property Group Inc reports
415 SPGI S&P Global, Inc. reports
418 SPLS Staples Inc. reports
GICS Sector GICS Sub Industry \
407 Real Estate REITs
415 Financials Diversified Financial Services
418 Consumer Discretionary Specialty Stores
Address of Headquarters Date first added CIK
407 Indianapolis, Indiana NaN 1063761
415 New York, New York NaN 64040
418 Framingham, Massachusetts NaN 791519
#convert column to list, add SPY because missing
L = ['SPY'] + df2['Ticker symbol'].tolist()
print (L)
['SPY', 'SPG', 'SPGI', 'SPLS']
df1 = pd.DataFrame({'SPY': [df.Open.iloc[0]]},
index=[df.index[0]])
df1 = df1.reindex(columns=L)
print (df1)
SPY SPG SPGI SPLS
2006-11-22 140.75 NaN NaN NaN
A: Suppose you have a list of data frames df_list for different tickers, and every item of the list has the same shape as the df in your example.
You can first concatenate them into one frame with
df1 = pd.concat(df_list)
Then with
df1[["Open", "Ticker"]].reset_index().set_index(["Date", "Ticker"]).unstack()
It should give you an output like
Open
Ticker AGG SPY
Date
2006-11-22 NAN 140.75
| |
doc_4591
|
Update: added Gemfile below for reference. When deleting Gemfile.lock prior to performing bundle update, the below error message is still generated.
Error Message
Bundler could not find compatible versions for gem "activemodel":
In Gemfile:
rails (~> 5.1.4) was resolved to 5.1.4, which depends on
activemodel (= 5.1.4)
web-console (~> 2.0.0.beta3) was resolved to 2.0.0, which depends on
activemodel (~> 4.0)
Bundler could not find compatible versions for gem "activesupport":
In Gemfile:
jbuilder (~> 2.7.0) was resolved to 2.7.0, which depends on
activesupport (>= 4.2.0)
rails (~> 5.1.4) was resolved to 5.1.4, which depends on
activesupport (= 5.1.4)
Bundler could not find compatible versions for gem "rails":
In Gemfile:
rails (~> 5.1.4)
Could not find gem 'rails (~> 5.1.4)' in any of the sources.
Gemfile
source 'https://rubygems.org'
gem 'rails', '~> 5.1.4'
gem 'sass-rails'
gem 'bootstrap-sass'
gem 'bcrypt'
gem 'uglifier', '~> 3.2.0'
gem 'coffee-rails', '~> 4.2.2'
gem 'jquery-rails', '~> 4.3.1'
gem 'turbolinks', '~> 5.0.1'
gem 'jbuilder', '~> 2.7.0'
gem 'sdoc', '~> 0.4.0', group: :doc
gem 'stripe'
group :development, :test do
gem 'sqlite3', '~> 1.3.9'
gem 'byebug', '~> 3.4.0'
gem 'web-console', '~> 2.0.0.beta3'
gem 'spring', '~> 1.1.3'
end
group :test do
gem 'minitest-reporters', '~> 1.0.5'
gem 'mini_backtrace', '~> 0.1.3'
gem 'guard-minitest', '~> 2.3.1'
end
group :production do
gem 'pg', '0.17.1'
gem 'rails_12factor', '0.0.2'
gem 'puma'
end
A: The gem web-console is locking your update process; first, change it to a more recent version like:
gem 'web-console', '>= 3.3.0'
Then remove Gemfile.lock and run bundle install. It is also always good to check the versions of the other gems, and to consult the official Rails upgrade guide at http://guides.rubyonrails.org/upgrading_ruby_on_rails.html
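As a quick reference, the remove/install step is just these two commands from the project root:
rm Gemfile.lock
bundle install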
| |
doc_4592
|
AUTO PLAY FUNCTION OF MY JS.
function(a, b, c) {
var d = function(b) {
this.core = b, this.core.options = a.extend({}, d.Defaults, this.core.options), this.handlers = {
"translated.owl.carousel refreshed.owl.carousel": a.proxy(function() {
this.autoplay()
}, this),
"play.owl.autoplay": a.proxy(function(a, b, c) {
this.play(b, c)
}, this),
"stop.owl.autoplay": a.proxy(function() {
this.stop()
}, this),
"mouseover.owl.autoplay": a.proxy(function() {
this.core.settings.autoplayHoverPause && this.pause()
}, this),
"mouseleave.owl.autoplay": a.proxy(function() {
this.core.settings.autoplayHoverPause && this.autoplay()
}, this)
}, this.core.$element.on(this.handlers)
};
d.Defaults = {
autoplay: !1,
autoplayTimeout: 5e3,
autoplayHoverPause: !1,
autoplaySpeed: !1
}, d.prototype.autoplay = function() {
this.core.settings.autoplay && !this.core.state.videoPlay ? (b.clearInterval(this.interval), this.interval = b.setInterval(a.proxy(function() {
this.play()
}, this), this.core.settings.autoplayTimeout)) : b.clearInterval(this.interval)
}, d.prototype.play = function() {
return c.hidden === !0 || this.core.state.isTouch || this.core.state.isScrolling || this.core.state.isSwiping || this.core.state.inMotion ? void 0 : this.core.settings.autoplay === !1 ? void b.clearInterval(this.interval) : void this.core.next(this.core.settings.autoplaySpeed)
}, d.prototype.stop = function() {
b.clearInterval(this.interval)
}, d.prototype.pause = function() {
b.clearInterval(this.interval)
}, d.prototype.destroy = function() {
var a, c;
b.clearInterval(this.interval);
for (a in this.handlers) this.core.$element.off(a, this.handlers[a]);
for (c in Object.getOwnPropertyNames(this)) "function" != typeof this[c] && (this[c] = null)
}, a.fn.owlCarousel.Constructor.Plugins.autoplay = d
}(window.Zepto || window.jQuery, window, document),
A: Try this:
$(document).ready(function(){
var owl = $(".owl-carousel");
owl.owlCarousel({
items: 1,
loop:true,
autoplay: true,
    autoplaySpeed: 5000,
    autoplayTimeout: 5000,
autoplayHoverPause: true
});
});
| |
doc_4593
|
*
*for Combobox control nested in DataRepeater control:
private void laduj_Masa_wych_cbx()
{
try
{
da1_przych = new MySqlDataAdapter(query_test, connection);
DataTable ddtt = new DataTable();
da1_przych.Fill(ddtt);
foreach (DataRow row in ddtt.Rows)
{
Masa_wych_cbx.Items.Add(row["masa"]);
}
}
catch (Exception ee)
{
MessageBox.Show(ee.Message);
}
}
*for ComboBox control placed directly in DataRepeater control:
private void laduj_Masa1_wych_cbx()
{
try
{
da1_przych = new MySqlDataAdapter(query_test, connection);
DataTable ddtt = new DataTable();
da1_przych.Fill(ddtt);
foreach (DataRow row in ddtt.Rows)
{
Masa1_wych_cbx.Items.Add(row["masa"]);
}
}
catch (Exception ee)
{
MessageBox.Show(ee.Message);
}
}
As a result, the ComboBox nested in the DataRepeater control does not contain any data, but the ComboBox managed by the WinForm correctly contains the list from the database. My second approach looked like below. I replaced both methods above with the two methods shown below. In this case both ComboBox controls are correctly filled with data.
*
*for Combobox control nested in DataRepeater control:
private void laduj_Masa_wych_cbx()
{
this.Masa_wych_cbx.Items.Add("0");
this.Masa_wych_cbx.Items.Add("50");
this.Masa_wych_cbx.Items.Add("100");
this.Masa_wych_cbx.Items.Add("130");
this.Masa_wych_cbx.Items.Add("350");
this.Masa_wych_cbx.Items.Add("500");
this.Masa_wych_cbx.Items.Add("1000");
this.Masa_wych_cbx.Items.Add("1500");
this.Masa_wych_cbx.Items.Add("2000");
}
*for ComboBox control placed directly in DataRepeater control:
private void laduj_Masa1_wych_cbx()
{
this.Masa1_wych_cbx.Items.Add("0");
this.Masa1_wych_cbx.Items.Add("50");
this.Masa1_wych_cbx.Items.Add("100");
this.Masa1_wych_cbx.Items.Add("130");
this.Masa1_wych_cbx.Items.Add("350");
this.Masa1_wych_cbx.Items.Add("500");
this.Masa1_wych_cbx.Items.Add("1000");
this.Masa1_wych_cbx.Items.Add("1500");
this.Masa1_wych_cbx.Items.Add("2000");
}
Unfortunately I need to fill the ComboBoxes with data dynamically, but it does not work inside the DataRepeater control - what can I do to fix this?
A: Try this:
ListItem li = new ListItem(row["masa"].ToString());
Masa1_wych_cbx.Items.Add(li);
| |
doc_4594
|
I tried this
$files = (Get-ChildItem "C:\Users\adm\script\signature\")
Foreach($file in $files) {
$signature =Get-ChildItem "C:\Users\adm\script\signature\$file"
}
Send-MailMessage -From "administrator@corp.internal" -to "administrator@corp.internal" -Subject "mot de passe compte windows" -Attachments $signature -body "$bodysignature" -BodyAshtm -SmtpServer "smtp.test"
but only 1 file is attached to the mail message.
Do you know how to fix that, and why the foreach seems to be executed only one time?
Thank you
A: Your variable $signature gets replaced on every iteration of the foreach, not appended to (+=).
For a better overview I suggest to use splatting.
$Attachments = (Get-ChildItem "C:\Users\adm\script\signature\").FullName
$param = @{
From = "administrator@corp.internal"
To = "administrator@corp.internal"
Subject = "mot de passe compte windows"
Attachments = $Attachments
Body = "$bodysignature"
BodyAshtm = $True
SmtpServer = "smtp.test"
}
Send-MailMessage @param
A: Your search is not recursive.
Try:
Get-ChildItem "C:\Users\adm\script\signature\" -Recurse
Depending on which PowerShell version you are using (dump it via $PSVersionTable) Get-ChildItem has an additional -File parameter, for only returning files, not folders.
So on PowerShell version 5, you can use
Get-ChildItem "C:\Users\adm\script\signature\" -Recurse -File
on older versions, you've to use
Get-ChildItem "C:\Users\adm\script\signature\" -Recurse | where { ! $_.PSIsContainer }
Hope that helps
| |
doc_4595
|
The purpose of this is so I can easily store data using serialize() and return it using unserialize().
I've been looking all day, can't find anything on this subject. Can anyone help get me started or link me to a useful tutorial?
Thanks.
A: You do not need the built-in PHP session for that; you can set the cookie yourself, read it back and validate it, and from that point on you have a fully functioning session!
function createCookieString($id, $user, $created)
{
$cookieData = array();
$hash = hashSession($id, $user, $created);
$cookieData[] = $id;
$cookieData[] = $user;
$cookieData[] = $created;
$cookieData[] = $hash;
return implode(':', $cookieData);
}
function hashSession($id, $user, $created)
{
$cookieSalt = 'Your Cookie Salt'; // google what a salt is in hashing, if necessary
return md5($id.$user.$created.$cookieSalt);
}
function parseCookieString($string)
{
return explode(':', $string);
}
To set the cookie, just use the setcookie function of PHP.
You just have to store the session in your database. It's usually very simple: just a table with ID, userID and created (being the timestamp). You don't need the hash to be in your DB because it can be recomputed from the reusable secret salt.
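A minimal usage sketch (the cookie name, the 30-day lifetime and the way $id comes back from your sessions table insert are assumptions):
<?php
// on login: insert a row into your sessions table, get back its $id,
// then hand the signed value to the browser
$created = time();
setcookie('app_session', createCookieString($id, $user, $created), $created + 30 * 24 * 3600, '/');

// on every request: read the cookie back and validate the hash
if (isset($_COOKIE['app_session'])) {
    list($id, $user, $created, $hash) = parseCookieString($_COOKIE['app_session']);
    if ($hash === hashSession($id, $user, $created)) {
        // valid session - load the user
    }
}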
feel free to ask any more questions!
| |
doc_4596
|
I am trying to disable right click on an element. I found a solution which does it like so:
<div oncontextmenu='return false'></div>
But since it is not a good practice to have event handlers in html, I tried something like:
<div id='test'></div>
and on the js part of the code:
let test = document.getElementById('test1')
test.addEventListener('contextmenu', (e) => {
console.log('right click!')
return false
})
let test2 = document.getElementById('test2')
test2.addEventListener('contextmenu', (e) => {
console.log('right click!')
return false
})
div {
height: 100px;
width: 100px;
border: 1px solid black;
}
#test1 {
background-color: red;
}
#test2 {
background-color: blue;
}
<div id='test1' oncontextmenu='return false'></div>
<div id='test2'></div>
Right click on test1 would be successfully disabled, but not on test2 and the console proves that the program control did reach the handler.
I am not looking for a workaround as
<div id='test'></div>
let test = document.getElementById('test')
test.addEventListener('contextmenu', (e) => {
e.preventDefault()
console.log('right click!')
})
works perfectly.
I want to know, why are the two elements in the above snippet behaving differently?
A: This behaviour is actually by design: a return false from a listener added with .addEventListener is simply ignored, whereas returning false from an inline on* handler (like your oncontextmenu='return false') does cancel the default action (tested in latest stable Chrome). You need to call the .preventDefault() method as shown below, and you can keep the return false for older inline-style handlers, to get what you want in modern and ancient browsers. A right click is logged to the console but no context menu shows up.
Note that I added some semicolons for style. Although in your example they were not technically needed, it is a good practice to include them.
let test2 = document.getElementById('test2')
test2.addEventListener('contextmenu', (e) => {
console.log('right click!');
e.preventDefault();//===added this===
return false;
});
div {
height: 100px;
width: 100px;
border: 1px solid black;
}
#test1 {
background-color: red;
}
#test2 {
background-color: blue;
}
<div id='test1' oncontextmenu='return false'></div>
<div id='test2'></div>
| |
doc_4597
|
Count = defaultdict(int)
for l in text:
for m in l['reviews'].split():
Count[m] += 1
print Count
The text is a list that looks like the following
[{'ideology': 3.4,
'ID': '50555',
'reviews': 'Politician from CA-21, very liberal and aggressive'},{'ideology': 1.5,
'ID': '10223',
'reviews': 'Retired politician'}, ...]
If I run this code, I get a result like this:
defaultdict(<type 'int'>, {'superficial,': 2, 'awesome': 1,
'interesting': 3, 'A92': 2, ....
What I want to get is a bigram count, instead of a unigram count. I tried the following code, but I get an error: TypeError: cannot concatenate 'str' and 'int' objects
Count = defaultdict(int)
for l in text:
for m in l['reviews'].split():
Count[m, m+1] += 1
I want to use code similar to this instead of other code that already exists on Stack Overflow. Most of the existing code uses a word list, but I want to count bigrams directly from the split() of the original text.
I want to get a result similar like this:
defaultdict(<type 'int'>, {('superficial', 'awesome'): 1, ('awesome, interesting'): 1,
('interesting','A92'): 2, ....}
Why do I get an error and how do I fix this code?
A: There is solution for counting objects in standard library, called Counter.
Also, with the help of itertools, your bigram counter script can look like this:
from collections import Counter, defaultdict
from itertools import izip, tee
#function from 'recipes section' in standard documentation itertools page
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return izip(a, b)
text = [{'ideology': 3.4, 'ID': '50555',
'reviews': 'Politician from CA-21, very liberal and aggressive'},
{'ideology': 1.5, 'ID': '10223',
'reviews': 'Retired politician'} ]
c = Counter()
for l in text:
c.update(pairwise(l['reviews'].split()))
print c.items()
A: If I understand your question correctly, the code below solves your problem.
Count = dict()
for l in text:
words = l['reviews'].split()
for i in range(0,len(words) -1):
bigram = " ".join(words[i:i+2] )
if not bigram in Count:
Count[bigram] = 1;
else:
Count[bigram] = Count[bigram] + 1
Count would be:
{'CA-21, very': 1, 'liberal and': 1, 'very liberal': 1, 'and aggressive': 1, 'Politician from': 1, 'aggressive Politician': 1, 'from CA-21,': 1}
Edit: if you want to use a tuple as the key, just change the join line, as shown below. Python dicts hash tuples too.
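For example, that line could become:
# use the tuple of the two words as the key instead of a joined string
bigram = tuple(words[i:i+2])   # e.g. ('very', 'liberal')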
A: Do you want to count the number of occurrences of each pair of adjacent words? Make them a tuple.
text = [{'ideology':3.4, 'ID':'50555', 'reviews':'Politician from CA-21, very liberal and aggressive'}]
Count = {}
for l in text:
words = l['reviews'].split()
for i in range(len(words)-1):
if not (words[i],words[i+1]) in Count:
Count[(words[i],words[i+1])] = 0
Count[(words[i],words[i+1])] += 1
print Count
result:
{('and', 'aggressive'): 1, ('from', 'CA-21,'): 1, ('Politician', 'from'): 1, ('CA-21,', 'very'): 1, ('very', 'liberal'): 1, ('liberal', 'and'): 1}
| |
doc_4598
|
I tried to use the documentation, but I didn't understand it well. Specifically, I tried to use:
chrome.windows.getCurrent(function(w) {
chrome.windows.get(w.id,
function (response){
alert(response.location.href);
});
});
But it didn't work. Any ideas?
Thanks
(sorry if the English is bad).
A: 1) have you added the "tabs" permission to the manifest?
{
"name": "My extension",
...
"permissions": ["tabs"],
...
}
2) It also looks like you should be using the tabs API and not the windows API if you want to know the current URL of the selected tab in the current Window
chrome.windows.getCurrent(function(w) {
chrome.tabs.getSelected(w.id,
function (response){
alert(response.url);
});
});
| |
doc_4599
|
Also, I cannot make the following code work; I don't know where the problem is:
<?php if ($privilege!= 'ADMIN'){echo
"REG IP $_SERVER['REMOTE_ADDR'];
exit();
}
?>
How should I write this kind of echo?
A: The user's IP is always part of the request - outputting it won't make anything take longer.
A simple:
echo $_SERVER['REMOTE_ADDR'];
will output the remote user's IP address (as long as it's in a PHP block).
A: Missing "
Better to do it like this (note the curly braces, which are needed around a quoted array key inside a double-quoted string):
echo "REG IP {$_SERVER['REMOTE_ADDR']}";
or concat the string and the variable:
echo 'REG IP ' . $_SERVER['REMOTE_ADDR'];
|