Q: Function to know whether a variable is a string or a number in FoxPro I am having a difficult time telling whether a variable is a string or a number in FoxPro 6 or other versions. I am new to this language.
A: You can use the VARTYPE or TYPE functions - these are slightly different and you should review the documentation before deciding which to use.
A: Have a look at the Vartype() function - it will return 'N' for numeric, 'C' for character and so on.
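For example, a minimal sketch (the variable names here are just for illustration):
cName = "hello"
nAge = 42
? VARTYPE(cName)   && prints "C" (character)
? VARTYPE(nAge)    && prints "N" (numeric)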
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using .live() to make jQuery plugin affect new element added via Ajax I am building a stream (similar to Facebook wall) that inserts new posts via Ajax.
I also use the jQuery plugin Linkify to transform any link strings into a clickable element.
When a user enters a link, the post is immediately shown on the page (via Ajax) but Linkify does not affect it since it wasn't in the DOM to begin with. When I reload the page, the link is then clickable.
Is there a way of using .live() to make a plugin affect future DOM elements added by ajax?
Some code:
//-------------Linkify
$('.main_container').linkify();
//-------------stream page structure
<div class="main_container">
<div class="posts_insert">
// target to be replaced via ajax
</div>
<div class="posts">
// text of post #1
</div>
<div class="posts">
// text of post #2
</div>
<div class="posts">
// text of post #3
</div>
</div>
//----------post structure, will replace .posts_insert above
<div class="posts_insert">
// text of post #1
</div>
<div class="posts">
// html
</div>
A: No, .live() is for attaching event handlers. You will need to process new data in your Ajax success handler.
A: insert the line $('.main_container').linkify(); into the success function of your ajax call...
eg:
$.ajax({
url: "test.html",
context: document.body,
success: function(result){
//Your functions here
$('.main_container').linkify();
}
});
This makes sure the linkify function is called AFTER new content is added to the page, affecting new posts :)
** EDIT: just clarifying, linkify should be called twice, once on page load and then once on ajax success :)
A: You can modify the Linkify source code itself and use .live() instead of .click() and the other event bindings used in Linkify. Only a few code changes are needed. Just like this one:
Line 239:
$(this).click(function()... ==> $(this).live('click', function()...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to show a marker with Google Maps API v3 + my own map tiles I've used the Google Maps API v3 to display a map of the Rice Eccles Stadium floor plan at the University of Utah (where we hold our district science fair, called the Salt Lake Valley Science & Engineering Fair). I'd like to show a marker for a given table number so judges can find the project they are judging. For the life of me, I can't get a single marker to show up. I've tried adding the zIndex parameter, changing the style, etc. Can someone please give me a ticket to the Clue Train? Thank you.
Here's the page showing my floor plan: http://slvsef.org/map.html
A: You do not match brackets {...} correctly in your function initialize(). See REMOVE/ADD lines below:
function initialize() {
var myLatlng = new google.maps.LatLng(0, 0);
var myOptions = {
center: myLatlng,
zoom: 2,
streetViewControl: false,
mapTypeControl: false,
panControl: false
};
var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
map.mapTypes.set('slvsef', MapType);
map.setMapTypeId('slvsef');
// } <===== REMOVE
var marker = new google.maps.Marker({
position: new google.maps.LatLng(0,0),
map: map,
title:"Hello World!"
});
} // <===== ADD
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Adding span tag in Rails link_to I've looked on SO about how to add a <span> tag but I didn't see an example that placed the <span> where I want using Rails 3 link_to:
<a href="#" class="button white"><span id="span">My span </span>My data</a>
I tried something like:
<%= link_to(content_tag{:span => "My span ", :id => "span"} @user.profile.my_data, "#", {:class => "button white"}) %>
But that didn't work.
A: link_to can take a block so I think you're after something like this:
<%= link_to '#', :class => 'button white' do %>
<span id="span">My span </span><%= @user.profile.my_data %>
<% end %>
A: link_to '#', :class => 'button white' do
<span id="span">My span </span>My data
end
A: A combination of the .html_safe with #{@user.profile.my_data} should work as well.
<%= link_to "<span id='span'>My span </span>#{@user.profile.my_data}".html_safe, "#", class: 'button white' %>
You can also use a content_tag so it would look like:
<%= link_to(content_tag(:span, "My span ", id: "span") + "#{@user.profile.my_data}", "#", class: 'button white') %>
They're basically identical but one might be easier on the eyes for you. Also, I'm pretty new to coding so if this is dead wrong for some crazy reason, please just comment and I'll change it. Thanks.
A: In HAML :
= link_to new_post_mobile_path(topic.slug), class: 'add-new-place-btn' do
%span{:class => "glyphicon glyphicon-plus", :style => "margin-right: 4px;"}
New Place
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
}
|
Q: Where do I download Android kernel source code (2.2 froyo) I have downloaded Android source code (2.2 froyo) but it doesn't have the kernel source. Where do I download Android 2.2 froyo kernel source code?
Thanks in advance.
A: Here :
git clone git://android.git.kernel.org/kernel/common.git
A: Here is a direct link to a .zip file of the Froyo sources
http://www.bigbluebrains.com/index.php/2010/08/08/browsing-android-2-2-froyo-source-code-in-eclipse/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I filter on composite keys? I need to filter this view:
function (doc) {
if (doc.doc_type == 'asd'){
emit([doc.date, doc.string_key_0, doc.string_key_1], doc.list_field);
};
}
I do:
key_filter_0 = ['START_TIME', 'STRING0', 'STRING1']
key_filter_1 = ['END_TIME', 'STRING0', 'STRING1']
VIEW[key_filter_0:key_filter_1]
but the view was only filtered by START_TIME and END_TIME. It just ignored the STRING0 / STRING1 key filters.
A: There's no such thing as 'key filters' in CouchDB.
Every item you emit into your view will be sorted by its key, and you can then find all items between a given startkey and endkey. In your case, items are first sorted by date then string_key_0 then string_key_1.
It sounds like you were expecting to only see items between 'START_TIME' and 'END_TIME' where all items had 'STRING0' for the second item and 'STRING1' for the third item, but this is not how CouchDB views work. They are a one-dimensional list of items, sorted by the whole key.
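If the goal is to hold STRING0/STRING1 fixed while ranging over dates, one standard CouchDB approach (a sketch, not something from the answer above) is to reorder the emitted key so the fixed parts come first and the ranged part comes last:
function (doc) {
    if (doc.doc_type == 'asd') {
        // fixed parts first, the date last, so a startkey/endkey range can pin the strings
        emit([doc.string_key_0, doc.string_key_1, doc.date], doc.list_field);
    }
}
You would then query with something like:
startkey=["STRING0", "STRING1", START_TIME]
endkey=["STRING0", "STRING1", END_TIME]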
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to change the color and strike out the clicked listview item? I am using a ListView in my program, which extends Activity. I need to change the color of the clicked list item and strike it out. How do I do it? Any help is really appreciated; thanks in advance...
A: You'll need to create your own custom background drawable for the list item.
This link helps: http://developer.android.com/guide/topics/resources/drawable-resource.html
A: Well, you could add an OnItemClickListener to the list view. When an item is selected, set the background of that item and use the following code on the TextView to strike it out:
textView.setPaintFlags(textView.getPaintFlags() | Paint.STRIKE_THRU_TEXT_FLAG);
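Putting the two together, a minimal sketch might look like this (it assumes the row view itself is a TextView and that listView is your ListView; both names are illustrative):
listView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
    @Override
    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
        // change the row background and strike out its text
        TextView textView = (TextView) view;
        textView.setBackgroundColor(Color.LTGRAY);
        textView.setPaintFlags(textView.getPaintFlags() | Paint.STRIKE_THRU_TEXT_FLAG);
    }
});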
A: You need to build a custom adapter for the list.
In the custom adapter you will need to override the following function:
@Override
public View getView(int position, View convertView, ViewGroup parent)
{
View view = super.getView(position, convertView, parent);
view.setBackgroundColor(Color.parseColor("#ffffff"));
return view;
}
This way you can strike out the text and change the cell color on selection, and restore them when the cell comes back after some operation.
If you post a code snippet of your list and adapter, it will be easier to help you further.
Vib
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why is my app filtered for Android 1.5 and 1.6 devices? Our last update for our application caused it to be filtered out for Android 1.5 and Android 1.6 devices. We did not change anything in our manifest (aside from the updated version code).
These similar questions did not help:
Android App no longer visible by Android 1.5 on devices
Android app not appearing in Market for 1.5&1.6 devices, Bluetooth is android:required="false"
We created a test app with a stripped down manifest and compiled it with the Android 1.5 SDK. Even this basic app is filtered out. We tried contacting Android Market support five days ago but Google makes it pretty clear they don't want to provide support to developers and say that it's unlikely we'll get a reply.
Here is the full AndroidManifest.xml for the test app:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.highwaynorth.test"
android:versionCode="6"
android:versionName="6.0">
<uses-sdk android:minSdkVersion="3" />
<application android:icon="@drawable/icon" android:label="@string/app_name">
<activity android:name=".MainActivity"
android:label="@string/app_name">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Why is this being filtered?
A: Our app came back this week after Google posted this announcement to the Market on October 25th:
25 October 2011 - Recently published apps not appearing in Android Market
We're disappointed that our app was gone from the Market for 6 weeks, and all we got at the end by way of an explanation was basically "it's fixed now".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: SQL Server 2008 problem I have a table with the following structure...
ReturnID  SumbitID  Status
1         1         1
1         NULL      2
2         2         3
3         3         1
3         3         1
I want this output:
ReturnID  TotalAttempt  Success
1         2             1
2         1             0
3         2             2
The count of ReturnID is TotalAttempt; when ReturnID = SumbitID and Status = 1, it counts as a success...
Thanks in Advance!
A: Something like this
SELECT
T.ReturnID
, COUNT(T.ReturnID) AS TotalAttempt
, SUM(CASE WHEN T.ReturnID = T.sumbitID AND T.Status = 1 THEN 1 ELSE 0 END) AS Status
FROM dbo.MyTable T
group by T.ReturnID
A: Check this:
SELECT T.ReturnID , COUNT(T.ReturnID) AS TotalAttempt , SUM(CASE WHEN T.ReturnID = T.sumbitID AND T.Status = 1 THEN 1 ELSE 0 END) AS Status FROM @table T GROUP BY T.ReturnID
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Knockout.js and mapping plugin not deep translating I use a json object to handle my menus and breadcrumbs. Now below you can see that the first two "Navigation" nodes are observable, but the last one isn't. It is just a regular array for some reason. Does the mapping plugin not deep clone the object?
Firebug output:
app.viewModel.members.layout().Navigation()[2].Navigation()[1].Navigation() <-- errors
Initialization:
app.viewModel.members.layout(ko.mapping.fromJS(json.Layout));
json.Layout JSON:
{
"Layout": {
"Navigation": [
{
"ID": "Dashboard",
"Type": "Menu",
"Route": "dashboard",
"Title": "Dashboard"
},
{
"ID": "Events",
"Type": "Menu",
"Route": "events",
"Title": "Events",
"Navigation": [
{
"ID": "AddEvent",
"Type": "Action",
"Route": "events/event",
"Title": "Add Event",
"Label": "+ Add Event",
"Order": "1"
},
{
"ID": "EditEvent",
"Type": "Item",
"Route": "events/event",
"Parameters": "eventid",
"Title": "Edit Event",
"Navigation": [
{
"ID": "EventGymCourts",
"Type": "Menu",
"Route": "events/event/gymcourts",
"Title": "Locations",
"Parameters": "eventid",
"Navigation": [
{
"ID": "AddEventGymCourt",
"Type": "Action",
"Route": "events/event/gymcourts/gymcourt",
"Title": "Add Location",
"Parameters": "eventid",
"Label": "+ Add Location",
"Order": "1"
},
{
"ID": "EditEventGymCourt",
"Type": "Item",
"Route": "events/event/gymcourts/gymcourt",
"Parameters": "eventid,gymcourtid",
"Title": "Edit Location"
}
]
},
{
"ID": "Teams",
"Type": "Menu",
"Route": "events/event/teams",
"Title": "Teams",
"Parameters": "eventid",
"Navigation": [
{
"ID": "AddTeam",
"Type": "Action",
"Route": "events/event/teams/team",
"Title": "Add Team",
"Parameters": "eventid",
"Label": "+ Add Team",
"Order": "1"
},
{
"ID": "EditTeam",
"Type": "Item",
"Route": "events/event/teams/team",
"Parameters": "eventid,teamid",
"Title": "Edit Team"
}
]
},
{
"ID": "Pools",
"Type": "Menu",
"Route": "events/event/pools",
"Title": "Pools",
"Parameters": "eventid",
"Navigation": [
{
"ID": "AddPool",
"Type": "Action",
"Route": "events/event/pools/pool",
"Title": "Add Pool",
"Parameters": "eventid",
"Label": "+ Add Pool",
"Order": "1"
},
{
"ID": "EditPool",
"Type": "Item",
"Route": "events/event/pools/pool",
"Parameters": "eventid,poolid",
"Title": "Edit Pool"
}
]
},
{
"ID": "Brackets",
"Type": "Menu",
"Route": "events/event/brackets",
"Title": "Brackets",
"Parameters": "eventid",
"Navigation": [
{
"ID": "AddBracket",
"Type": "Action",
"Route": "events/event/brackets/bracket",
"Title": "Add Bracket",
"Parameters": "eventid",
"Label": "+ Add Bracket",
"Order": "1"
},
{
"ID": "EditBracket",
"Type": "Item",
"Route": "events/event/brackets/bracket",
"Parameters": "eventid,bracketid",
"Title": "Edit Bracket"
}
]
}
]
}
]
},
{
"ID": "Gyms",
"Type": "Menu",
"Route": "gyms",
"Title": "Locations",
"Navigation": [
{
"ID": "AddGym",
"Type": "Action",
"Route": "gyms/gym",
"Title": "Add Location",
"Label": "+ Add Gym",
"Order": "1"
},
{
"ID": "EditGym",
"Type": "Item",
"Route": "gyms/gym",
"Parameters": "gymid",
"Title": "Edit Location",
"Navigation": {
"ID": "EditMap",
"Type": "Menu",
"Route": "gyms/gym/map",
"Parameters": "gymid",
"Title": "Map"
}
}
]
}
]
}
}
Update:
Looking closer, it looks like, since that "Navigation" has only one node, the mapping plugin makes it a single object and not an array like the others. How can I remedy this? Using create in the mapping plugin?
A: Well I used the create method in the mapping plugin.
var mapping = {
'Navigation': {
create: function (options) {
if (options.data.Navigation) {
if (options.data.Navigation instanceof Array) {
options.data = ko.mapping.fromJS(options.data, mapping);
}
else{
options.data.Navigation = [options.data.Navigation];
}
}
return ko.mapping.fromJS(options.data);
}
}
};
app.viewModel.members.layout(ko.mapping.fromJS(json.Layout, mapping));
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Ninject and private constructors We're using Ninject for IOC.
All of our Repository objects can (and should) be mocked for unit testing. I'd like to enforce that all developers code only to interfaces when interacting with Repositories. For this, I'd like to make the constructors private and create static accessor factory methods for construction:
public class SomeRepository : ISomeRepository
{
private SomeRepository()
{
}
public static ISomeRepository Create()
{
return StandardKernel.Get<ISomeRepository>();
}
}
The problem lies in this: how do I have Ninject create the actual object? I have the repository interface and class in the same project.
A: We're ultimately going with the following:
public class SomeRepository : ISomeRepository
{
private SomeRepository()
{
}
public static ISomeRepository CreateForIOC()
{
return new SomeRepository();
}
}
and during module loading, we're mapping the StandardKernel's ISomeRepository interface to the CreateForIOC() method.
This does not stop developers from calling CreateForIOC directly, but at least forces them to a) code to an interface, and b) realize that CreateForIOC() is probably not the right method to call when instantiating the object, and at least ask a lead developer about it.
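For reference, such a mapping during module loading could look roughly like this (a sketch using Ninject's ToMethod binding; RepositoryModule is an illustrative name, not the project's actual module code):
using Ninject.Modules;
public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        // map the interface to the static factory method
        Bind<ISomeRepository>().ToMethod(ctx => SomeRepository.CreateForIOC());
    }
}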
A: Instead of either the private constructor or the factory method, why not just have Ninject provide the concrete repository to any objects that need it?
A: It looks like you're trying to use a singleton pattern. In general, the singleton pattern is considered an anti-pattern, largely because it hinders unit testing. Dependency injection allows you to create singletons via configuration without having to use the singleton pattern.
I would suggest that you instead simply configure Ninject to create a single instance, without the private constructor.
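In Ninject that is just a scope on an ordinary binding, something like the following sketch (it assumes SomeRepository keeps a constructor Ninject can reach, i.e. not the private one):
Bind<ISomeRepository>().To<SomeRepository>().InSingletonScope();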
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Corrupted Git Branch after suspected hard disk issue So I was having a problem with my hard disk (I think). Every time I ran git log commands (when running Windows through VMWare Fusion for Mac) it would throw a fatal error causing VMWare to crash (something about being unable to reach the disk drive). Even if I booted to Boot Camp natively and tried running git log, I got some error about less.exe failing.
I couldn't copy/delete any files in the git/bin folder without problems (VMWare crashing). So I tried to just install the latest git (1.6.7 from 1.6.4). After doing this, I could successfully run git log commands again and everything seemed to be working.
Before the original crash had occurred, I had a branch I was working on that has a lot of changes. Now when I try to checkout that branch, I get the following error:
error: unable to unpack d7a66a887fbe9b5f2baec0580da1fb4c1f39851e header
error: inflateEnd: failed
fatal: loose object d7a66a887fbe9b5f2baec0580da1fb4c1f39851e (stored in .git/objects/d7/a66a887fbe9b5f2baec0580da1fb4c1f39851e) is corrupt.
I get a similar error if I do git cat-file -t d7a66a887fbe9b5f2baec0580da1fb4c1f39851e. I saw this on another post. My branch had not been pushed to the network repository yet as I was only working on it locally. Any chance of recovering/fixing this branch? I'm desperate to not lose this code :(
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: jQuery understanding `wrap` I'm having some syntax understanding issues.
I'm trying to take a div, clone it, wrap the clone in two new generated divs, and then stick it all into the DOM right before the closing BODY tag. This is what I have:
$('.myDiv').click(function(){
var $myDiv = $(this).clone();
var $myWrapper1 = $('div').css('border','10px solid blue');
var $myWrapper2 = $('div').css('border','10px solid green');
$myDiv.wrap($myWrapper1).wrap($myWrapper2).appendTo('body');
});
jsbin live example: http://jsbin.com/upedox/4/
This isn't doing what I am expecting it to do, so clearly I'm not fully understanding wrap. What I want to happen is to end up with a copy of my div (red border) with two divs wrapping it (one with a green and blue border) and for that to then be inserted after the existing div.
What is instead happening is that the div gets cloned, but isn't wrapped with the other divs; rather, its content is just inserted into the last div I wrap it with (so I end up with a cloned green border div) and it sticks it above the existing div.
What am I misunderstanding with wrap?
A: As I explained, picture the following:
<div class="foo">Bar</div>
If you want to wrap foo in a bar div, you'd call it:
var $r = $('.foo').wrap($('<div>',{class:'bar'}));
Instead of $r resulting in <div class="bar"><div class="foo">Bar</div></div> (as you're expecting) what you're actually getting back is the original <div class="foo">Bar</div> (and inside the DOM it's been wrapped with <div class="bar"></div>).
So, instead (considering you're cloning it and not working directly in DOM) use append instead:
var $orig = $('.foo').clone();
var $firstWrap = $('<div>',{class:'bar'});
var $secondWrap = $('<div>',{class:'baz'});
var $r = $secondWrap.append($firstWrap.append($orig));
Now, the above has <div class="baz"><div class="bar"><div class="foo">Bar</div></div></div> inside (like you're desiring) which you can then appendTo('body')
live demo
A: $($myDiv).wrap($myWrapper1) returns $myDiv which doesn't include the wrapped element.
The code should look like below.
$('document').ready(function(){
$('.myDiv').click(function(){
var $myDiv = $(this).clone();
var $myWrapper1 = $('<div>').css('border','10px solid blue');
var $myWrapper2 = $('<div>').css('border','10px solid green');
$myDiv.wrap($myWrapper1).parent()
.wrap($myWrapper2).parent()
.appendTo('body');
});
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: .NET Handle to HWND Another basic problem - I can't convert a Process.MainWindowHandle to an HWND. I've tried using Process->MainWindowHandle.ToPointer() and passing that through, but still no love.
Here is a function that gets an HWND from a Point, and then passes that through, and it works perfectly.
void echoMouseObject() {
long vmID;
AccessibleContext acParent;
AccessibleContext ac;
POINT p;
HWND hwnd;
RECT windowRect;
GetCursorPos(&p);
hwnd = WindowFromPoint(p);
if (GetAccessibleContextFromHWND(hwnd, &vmID, &acParent)) {
GetWindowRect(hwnd, &windowRect);
// send the point in global coordinates; Java will handle it!
if (GetAccessibleContextAt(vmID, acParent, (jint) p.x, (jint) p.y, &ac)) {
displayAccessibleInfo(vmID, ac, p.x, p.y); // can handle null
ReleaseJavaObject(vmID, ac);
}
}
}
However, when I call GetAccessibleContextFromHWND() in the following manner, where win_handle is declared as:
HWND win_handle;
and it's assigned a value by:
Process^ p = gcnew Process();
p = getJavaProcess();
JA->setWindow((HWND)p->MainWindowHandle.ToPointer());
JA->test();
void JavaAccess::test(void)
{
long vm=0;
AccessibleContext* ac = new AccessibleContext();
BOOL t = GetAccessibleContextFromHWND(win_handle, &vm, ac);
AccessibleContextInfo* aci = new AccessibleContextInfo();
GetAccessibleContextInfo(vm, *ac, aci);
}
I get a false! The function fails to return a valid vmID, or accessibleContext. What on earth? :-S
getJavaProcess() is just a function that sorts through the Processes and returns the one that matches the criteria I've defined.
I have successfully hooked the Java Access Bridge callbacks, and they return/trigger as expected, so I know that the Bridge is loading alright. I can also call getVersionInfo(vmID) from within a callback, and it works as expected. I'm so confused.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Backbone.js: get current route Using Backbone, is it possible for me to get the name of the current route? I know how to bind to route change events, but I'd like to be able to determine the current route at other times, in between changes.
A: If you use the root setting for the Router, you can also include it to get the 'real' fragment.
(Backbone.history.options.root || "") + "/" + Backbone.history.fragment
A: Here's a tad more verbose (or, depending on your taste, more readable) version of Simon's answer:
current: function () {
var fragment = Backbone.history.fragment,
routes = _.pairs(this.routes),
route,
name,
found;
found = _.find(routes, function (namedRoute) {
route = namedRoute[0];
name = namedRoute[1];
if (!_.isRegExp(route)) {
route = this._routeToRegExp(route);
}
return route.test(fragment);
}, this);
if (found) {
return {
name: name,
params: this._extractParameters(route, fragment),
fragment: fragment
};
}
}
A: Robert's answer is interesting, but sadly it will only work if the hash is exactly as defined in the route. If you, for example, have a route for user(/:uid), it won't be matched if the Backbone.history.fragment is either "user" or "user/1" (both of which are the two most obvious use cases for such a route). In other words, it'll only find the appropriate callback name if the hash is exactly "user(/:uid)" (highly unlikely).
Since I needed this functionality, I extended the Backbone.Router with a current function that reuses some of the code the History and Router objects use to match the current fragment against the defined routes and trigger the appropriate callback.
For my use case, it takes the optional parameter route, which, if set to anything truthy, will return the corresponding function name defined for the route. Otherwise it'll return the current hash fragment from Backbone.history.fragment.
You can add the code to your existing Extend where you initialize and setup the Backbone router.
var Router = new Backbone.Router.extend({
// Pretty basic stuff
routes : {
"home" : "home",
"user(:/uid)" : "user",
"test" : "completelyDifferent"
},
home : function() {
// Home route
},
user : function(uid) {
// User route
},
// Gets the current route callback function name
// or current hash fragment
current : function(route){
if(route && Backbone.History.started) {
var Router = this,
// Get current fragment from Backbone.History
fragment = Backbone.history.fragment,
// Get current object of routes and convert to array-pairs
routes = _.pairs(Router.routes);
// Loop through array pairs and return
// array on first truthy match.
var matched = _.find(routes, function(handler) {
var route = handler[0];
// Convert the route to RegExp using the
// Backbone Router's internal convert
// function (if it already isn't a RegExp)
route = _.isRegExp(route) ? route : Router._routeToRegExp(route);
// Test the regexp against the current fragment
return route.test(fragment);
});
// Returns callback name or false if
// no matches are found
return matched ? matched[1] : false;
} else {
// Just return current hash fragment in History
return Backbone.history.fragment
}
}
});
// Example uses:
// Location: /home
// console.log(Router.current()) // Outputs 'home'
// Location: /user/1
// console.log(Router.current(true)) // Outputs 'user'
// Location: /user/2
// console.log(Router.current()) // Outputs 'user/2'
// Location: /test
// console.log(Router.current(true)) // Outputs 'completelyDifferent'
I'm sure some improvements could be made, but this is a good way to get you started. Also, it's easy to create this functionality without extending the Route-object. I did this because it was the most convenient way for my set-up.
I haven't tested this fully yet, so please let me know if anything goes south.
UPDATE 04/25/2013
I made some changes to the function, so instead of returning either the hash or the route callback name, I return an object with fragment, params and route, so you can access all the data from the current route, much like you would from the route event.
You can see the changes below:
current : function() {
var Router = this,
fragment = Backbone.history.fragment,
routes = _.pairs(Router.routes),
route = null, params = null, matched;
matched = _.find(routes, function(handler) {
route = _.isRegExp(handler[0]) ? handler[0] : Router._routeToRegExp(handler[0]);
return route.test(fragment);
});
if(matched) {
// NEW: Extracts the params using the internal
// function _extractParameters
params = Router._extractParameters(route, fragment);
route = matched[1];
}
return {
route : route,
fragment : fragment,
params : params
};
}
See previous code for further comments and explanations, they look mostly the same.
A: If you look at the source for the Router, you'll see that when the router triggers the event saying that something changes, it passes the name with it as "route:name".
http://documentcloud.github.com/backbone/docs/backbone.html#section-84
You can always hook the "route" event on the router and store it to get the current route.
A: If you have instantiated a Router in your application, the following line returns the current fragment:
Backbone.history.getFragment();
From the Backbone.js documentation:
"
[...]
History serves as a global router (per frame) to handle hashchange events or pushState, match the appropriate route, and trigger callbacks. You shouldn't ever have to create one of these yourself — you should use the reference to Backbone.history that will be created for you automatically if you make use of Routers with routes.
[...]"
If you need the name of the function bound to that fragment, you can make something like this inside the scope of your Router:
alert( this.routes[Backbone.history.getFragment()] );
Or like this from outside your router:
alert( myRouter.routes[Backbone.history.getFragment()] );
A: To get the calling route (or URL) from within the called route handler, you can check:
Backbone.history.location.href ... the full url
Backbone.history.location.search ... query string starting from ?
I got here while searching for this answer, so I guess I should leave what I found.
A: router = new Backbone.Router.extend({
routes: { "stuff/:id" : "stuff" },
stuff: function(id) {}
});
router.on('route', function(route) {
this.current = route;
});
Now if you navigate to /stuff/1, router.current will be "stuff"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "140"
}
|
Q: How to set grouped UITableView section names without an NSFetchedResultsController in play I have used a number of grouped tables tied to Core Data managed objects, where the sectionNameKeyPath value is used to identify the attribute in the data that should be used to denote sections for the table.
But how do I indicate the "sectionNameKeyPath equivalent" when I have a table that is being used to present an NSMutableArray full of objects that look like this:
@interface SimGrade : NSObject {
NSNumber * scoreValue;
NSString * commentInfo;
NSString * catName;
}
I would like to have sections defined according to the "catName" member of the class.
Consider, for example, that my mutable array has 5 entries where the 5 "catName" values are "Blue", "Blue", "Blue", "Red", and "Red". So I'd want the number of sections in the table for that example to be 2.
So, what I would 'like to do' could be represented by the following pseudo-code:
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
// Return the number of sections.
return (The number of unique catNames);
}
Note: My interest in doing this is not so much for displaying separate sections in the table, but rather so that I can more easily calculate the sums of scoreValues for each category.
<<<<< UPDATE >>>>>>>>>
Joshua's help, as documented in his response, has been right on. Here are the two new handlers for the number of sections and the number of rows per section...
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
NSMutableSet *counter = [[NSMutableSet alloc] init];
[tableOfSimGrades enumerateObjectsUsingBlock:^(id object, NSUInteger idx, BOOL *stop) {
[counter addObject:[object catName]];
}];
NSInteger cnt = [counter count];
[counter release];
NSLog(@">>>>> number of sections is -> %d", cnt);
return cnt;
}
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
// Return the number of rows in the section.
NSMutableDictionary *counter = [[NSMutableDictionary alloc] init];
NSMutableArray *cats = [[NSMutableArray alloc] init];
__block NSNumber *countOfElements;
[tableOfSimGrades enumerateObjectsUsingBlock:^(id object, NSUInteger idx, BOOL *stop) {
// check the dictionary for they key, if it's not there we get nil
countOfElements = [counter objectForKey:[object catName]];
if (countOfElements) {
// NSNumbers can't do math, so we use ints.
int curcount = [countOfElements intValue];
curcount++;
[counter setObject:[NSNumber numberWithInt:curcount] forKey:[object catName]];
NSLog(@">>>> adding object %d to dict for cat: %@", curcount, [object catName]);
} else {
[counter setObject:[NSNumber numberWithInt:1] forKey:[object catName]];
[cats addObject:[object catName]];
NSLog(@">>>>> adding initial object to dict for cat: %@", [object catName]);
}
}];
countOfElements = [counter objectForKey:[cats objectAtIndex: section]];
int catcount = [countOfElements intValue];
[counter release];
[cats release];
return catcount;
}
My current issue with this routine now lies in the following function... It is ignorant of any sections in the NSMutableArray, and so for each section it starts at index 0 of the array instead of at the 0th element of the appropriate section.
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
static NSString *CellIdentifier = @"Cell";
UITableViewCell *cell = [self.tableView dequeueReusableCellWithIdentifier:CellIdentifier];
if (cell == nil) {
cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:CellIdentifier] autorelease];
}
// Configure the cell...
SimGrade *tmpGrade = [[SimGrade alloc] init];
tmpGrade = [tableOfSimGrades objectAtIndex: indexPath.row];
cell.detailTextLabel.text = [NSString stringWithFormat:@"Category: %@", tmpGrade.catName];
// [tmpGrade release];
return cell;
}
How do I transform the "indexpath" sent to this routine into the appropriate section of the mutableArray?
Thanks,
Phil
A: You could do something like this:
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
NSMutableSet *counter = [[NSMutableSet alloc] init];
[arrayOfSims enumerateObjectsUsingBlock:^(id object, NSUInteger idx, BOOL *stop) {
[counter addObject:object.catName];
}];
NSInteger cnt = [counter count];
[counter release];
return cnt;
}
you'd probably want to memoize that, for performance reasons (but only after profiling it).
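For example, a memoized version might cache the count in an ivar (a sketch; cachedSectionCount is a hypothetical ivar you would add and reset to 0 whenever arrayOfSims changes):
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
    // recompute only when the cache has been invalidated
    if (cachedSectionCount == 0) {
        NSMutableSet *counter = [[NSMutableSet alloc] init];
        for (id object in arrayOfSims) {
            [counter addObject:[object catName]];
        }
        cachedSectionCount = [counter count];
        [counter release];
    }
    return cachedSectionCount;
}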
--- EDIT ---
You can use an NSMutableDictionary, too, to get counts of individual categories.
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
NSMutableDictionary *counter = [[NSMutableDictionary alloc] init];
__block NSNumber *countOfElements;
[arrayOfSims enumerateObjectsUsingBlock:^(id object, NSUInteger idx, BOOL *stop) {
// check the dictionary for they key, if it's not there we get nil
countOfElements = [counter objectForKey:object.catName];
if (countOfElements) {
// NSNumbers can't do math, so we use ints.
int curcount = [countOfElements intValue];
curcount++;
[counter setObject:[NSNumber numberWithInt:curcount] forKey:object.catName];
} else {
[counter setObject:[NSNumber numberWithInt:1] forKey:object.catName];
}
}];
NSInteger cnt = [counter count];
// we can also get information about each category name, if we choose
[counter release];
return cnt;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: I'm thrilled that this scala snippet uses all of my processors to find the (correct) answer faster but... why does it do that? So I was messing around with some easy problems to get better at scala and I wrote the following program to calculate primes using an Eratosthenes's sieve. When I bump up the number of primes to find, I noticed that my cpu would max out during the calculation. Now I have no clue why it's using more than 1 core and I was afraid it would muck up the answer but it appears to be correct upon multiple runs so it must not be. I'm not using .par anywhere and most all of my logic is in for-comprehensions.
Edit: I'm using scala 2.9.1
object Main {
val MAX_PRIME = 10000000
def main(args: Array[String]) {
println("Generating array")
val primeChecks = scala.collection.mutable.ArrayBuffer.fill(MAX_PRIME + 1)(true)
primeChecks(0) = false
println("Finding primes")
for (
i ← 2 to MAX_PRIME if primeChecks(i);
j ← i * 2 to MAX_PRIME by i
) primeChecks(j) = false
println("Filtering primes")
val primes = for { (status, num) ← primeChecks.zipWithIndex if status } yield num
println("Found %d prime numbers!".format(primes.length))
println("Saving the primes")
val formatter = new java.util.Formatter("primes.txt", "UTF-8")
try {
for (prime ← primes)
formatter.format("%d%n", prime.asInstanceOf[Object])
}
finally {
try { formatter.close } catch { case _ ⇒ }
}
}
}
Edit 2: You can use the following snippet in a REPL to get the multi-threading behavior, so it has to be because of the for-comprehension (at least in Scala 2.9.1).
val max = 10000000
val t = scala.collection.mutable.ArrayBuffer.fill(max + 1)(true)
for (
i <- 2 to max if t(i);
j <- i * 2 to max by i
) t(j) = false
A: It's not your code that's using multiple threads, it's the JVM that is. What you are seeing is the GC kicking in. If I increase MAX_PRIME to 1000000000 and give it 6Gb of Java heap to play with, I can see a steady state of 100% of 1 CPU and about 4Gb mem. Every so often the GC kicks in and it then uses 2 CPUs. The following Java stack trace (pruned for clarity) shows what's running inside the JVM:
"Attach Listener" daemon prio=3 tid=0x0000000000d13800 nid=0xf waiting on condition [0x0000000000000000]
"Low Memory Detector" daemon prio=3 tid=0x0000000000a15000 nid=0xd runnable [0x0000000000000000]
"C2 CompilerThread1" daemon prio=3 tid=0x0000000000a11800 nid=0xc waiting on condition [0x0000000000000000]
"C2 CompilerThread0" daemon prio=3 tid=0x0000000000a0e800 nid=0xb waiting on condition [0x0000000000000000]
"Signal Dispatcher" daemon prio=3 tid=0x0000000000a0d000 nid=0xa runnable [0x0000000000000000]
"Finalizer" daemon prio=3 tid=0x00000000009e7000 nid=0x9 in Object.wait() [0xffffdd7fff6dd000]
"Reference Handler" daemon prio=3 tid=0x00000000009e5800 nid=0x8 in Object.wait() [0xffffdd7fff7de000]
"main" prio=3 tid=0x0000000000428800 nid=0x2 runnable [0xffffdd7fffa3d000]
java.lang.Thread.State: RUNNABLE
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:76)
"VM Thread" prio=3 tid=0x00000000009df800 nid=0x7 runnable
"GC task thread#0 (ParallelGC)" prio=3 tid=0x0000000000438800 nid=0x3 runnable
"GC task thread#1 (ParallelGC)" prio=3 tid=0x000000000043c000 nid=0x4 runnable
"GC task thread#2 (ParallelGC)" prio=3 tid=0x000000000043d800 nid=0x5 runnable
"GC task thread#3 (ParallelGC)" prio=3 tid=0x000000000043f800 nid=0x6 runnable
"VM Periodic Task Thread" prio=3 tid=0x0000000000a2f800 nid=0xe waiting on condition
There's only one thread (main) running Scala code; all the others are internal JVM ones. Note in particular that there are 4 GC threads in this case - that's because I'm running this on a 4-way machine and by default the JVM will allocate 1 GC thread per core - the exact setup will depend on the particular mix of platform, JVM and command-line flags that are used.
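If you want to confirm it is the parallel GC threads you are seeing, you can pin their count to one when launching (a sketch; it assumes the HotSpot JVM and the standard scala launcher, which passes -J options through to the JVM):
scala -J-XX:+UseParallelGC -J-XX:ParallelGCThreads=1 Main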
If you want to understand the details (It's complicated!), the following links should get you started:
*
*Java SE 6 Performance White Paper
*Memory Management in the JavaHotSpot™ Virtual Machine
A: Update: Further testing with the provided jar leads to multi-core usage on OS X, Java 1.6.0_26, HotSpot Server VM, Scala 2.9.1.
If you're on a *nix-based system, this will say 90% and really only be using one core. It'll say 230% for 100% of 2 cores and 30% of another or any variation thereof.
For this code on my machine, the CPU usage bounces between 99% and 130%, the 130% when the garbage collector is running in the background.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Twitter suggestions returning 404? Running Twitter gem version 1.7.2, OSX Lion, Ruby 1.9.2.
In Rails console.. if you put:
client = Twitter::Client.new(:oauth_token => actual_token_here, :oauth_token_secret => actual_secret_here)
client.suggestions('entertainment')
It retrieves results from Twitter's API. But:
client = Twitter::Client.new
client.suggestions('entertainment')
Returns the same error:
Twitter::NotFound: GET https://api.twitter.com/1/users/suggestions/entertainment.json: 404: Can't find that category
https://dev.twitter.com/docs/api/1/get/users/suggestions/%3Aslug suggests that you don't need to be authenticated to make this API call.
So shouldn't it work without the tokens as well? The strange thing is, I've asked someone else to do this on their dev machine and it works fine. I don't know where to proceed from here. How can I track down where the problem is coming from?
This is the full trace:
Twitter::NotFound: GET https://api.twitter.com/1/users/suggestions/entertainment.json: 404: Can't find that category
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/twitter-1.7.2/lib/faraday/response/raise_http_4xx.rb:16:in `on_complete'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/faraday-0.7.4/lib/faraday/response.rb:9:in `block in call'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/faraday-0.7.4/lib/faraday/response.rb:62:in `on_complete'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/faraday-0.7.4/lib/faraday/response.rb:8:in `call'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/faraday-0.7.4/lib/faraday/request/url_encoded.rb:14:in `call'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/faraday-0.7.4/lib/faraday/request/multipart.rb:13:in `call'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/twitter-1.7.2/lib/faraday/request/multipart_with_file.rb:18:in `call'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/faraday-0.7.4/lib/faraday/connection.rb:203:in `run_request'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/faraday-0.7.4/lib/faraday/connection.rb:85:in `get'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/twitter-1.7.2/lib/twitter/request.rb:27:in `request'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/twitter-1.7.2/lib/twitter/request.rb:6:in `get'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/twitter-1.7.2/lib/twitter/client/user.rb:117:in `suggestions'
from (irb):2
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.1.0/lib/rails/commands/console.rb:45:in `start'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.1.0/lib/rails/commands/console.rb:8:in `start'
from /Users/Chris/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.1.0/lib/rails/commands.rb:40:in `<top (required)>'
from script/rails:6:in `require'
from script/rails:6:in
A: Errr. For some reason, I needed to specify the lang param. For example:
client.suggestions("entertainment", :lang => "en")
Works now!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: IEditableCollectionView loses selection on CommitEdit I have a CollectionViewSource in which dates are grouped by years and months. The dates are displayed in a TreeView (actually in a RadTreeView).
The goal is to change the selected date without recreating the view (without calling the Refresh method).
To do this I implemented IEditableObject in the date view model and changed the date like this:
var selectedDate = SelectedDate;
var editableCollectionView = Dates.View as IEditableCollectionView;
if (null != editableCollectionView && !editableCollectionView.IsEditingItem)
{
editableCollectionView.EditItem(selectedDate);
selectedDate.Date = dt.Date;
editableCollectionView.CommitEdit();
}
But in this case the TreeView loses the selection, and I need to select the "selected item" again, which leads to refreshing the data bound to the selected date.
How can I solve this issue? Ideally in an MVVM way.
UPDATE:
If the date is alone in its group, changing the date causes the item which contains it to collapse.
UPDATE 2:
Maybe I shouldn't use the SelectedDate property and should work only with IsSelected and IsExpanded?
A: Leverage MVVM for the TreeView item.
Include two writable properties in your item-level class (which serves as the data context for each individual tree view item):
*
*IsExpanded
*IsSelected
Implement INotifyPropertyChanged and raise the property changed notification in the setters of the above two properties.
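A minimal sketch of such an item class might look like this (DateItemViewModel is an illustrative name; only the two properties and the notification plumbing are shown):
public class DateItemViewModel : INotifyPropertyChanged
{
    private bool isExpanded;
    private bool isSelected;
    public bool IsExpanded
    {
        get { return isExpanded; }
        set { isExpanded = value; OnPropertyChanged("IsExpanded"); }
    }
    public bool IsSelected
    {
        get { return isSelected; }
        set { isSelected = value; OnPropertyChanged("IsSelected"); }
    }
    public event PropertyChangedEventHandler PropertyChanged;
    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}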
Now, at the TreeView level, have a Style that binds these properties:
<TreeView.Resources>
<Style TargetType="{x:Type TreeViewItem}">
<Setter Property="IsExpanded" Value="{Binding IsExpanded, Mode=TwoWay}" />
<Setter Property="IsSelected" Value="{Binding IsSelected, Mode=TwoWay}" />
....
</...>
This way the expansion and selection are maintained on the tree view as long as they are maintained in the tree view items' data contexts.
Now remember that the expanded state can be true for multiple items, but the selected state can be true for only one item across the entire tree view.
Hope this helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Array and if-else statement issue I have two forms. If $_POST['submit'] of the first form is clicked, the second form is loaded and an array is assigned with the $_POST data of the first form; else, if the $_POST['submit'] of the second form is pressed, the same array needs to have additional elements added, this time from the $_POST of the second form. The problem is that my array becomes empty as soon as the elseif is executed. I tried passing the array by reference but that did not help me.
if(isset(btn_Of_Form1)) {
echo "form2"; $my_arr =$_POST;
}
elseif(btn_of_Form2){
$my_arr =array_merge($my_arr,$_POST);
}
A: You can use $_SESSION and serialize:
session_start();
if(isset(btn_Of_Form1))
{
echo "form2";
$_SESSION['data'] = serialize($_POST);
}
elseif(btn_of_Form2)
{
$my_arr = unserialize($_SESSION['data']);
$my_arr = array_merge($my_arr,$_POST);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: jquery ui autocomplete: clicking on list does nothing I use jQuery UI autocomplete and took the example from this link.
This is my modification:
$(function() {
$(".autocomplete").live('keyup.autocomplete', function() {
$(".autocomplete").autocomplete({
source: function(request, response) {
$.ajax({
'url': "http://localhost/project/index.php/product/search_data/",
'data': { 'txt_product_name': $('#txt_product_name').val()},
'dataType': "json",
'type': "POST",
'success': function(data){
response(data);
}
})
},
minLength: 2,
focus: function( event, ui ) {
$(".txt_product_id").val(ui.item.product_id);
return false;
},
select: function( event, ui ) {
$( ".txt_product_id" ).val(ui.item.product_id);
$( ".txt_product_code" ).val(ui.item.product_code);
$( ".txt_product_name" ).val(ui.item.product_name);
return false;
}
}).data("autocomplete" )._renderItem = function( ul, item ) {
return $( "<li></li>" )
.data( "item.autocomplete", item )
.append( "<a>" + item.product_code + "<br>" + item.product_name + "</a>" )
.appendTo(ul);
};
})
});
The code works fine. For my own purposes, I revised _renderItem so the result will be displayed in columns:
data("autocomplete" )._renderItem = function(table, item) {
return $( "<tr></tr>" )
.data("item.autocomplete", item)
.append("<td><a>" + item.product_code + "</a></td><td><a>" + item.product_name + "</a></td>")
.appendTo(table);
};
The suggestion list shows up every time I type my entries, but unlike before, clicking on the list does nothing to txt_product_id, txt_product_code and txt_product_name.
What should I do? Any suggestions are welcome.
Thanks in advance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Xcode 4.1 — how to clone an app to make n distinct copies I have an Xcode project that builds a tiny experimental Cocoa application, call it "foo.app", just as I want it. For evaluation purposes, I'd like to create 10 completely distinct copies, foo1.app, foo2.app, foo3.app ... foo10.app.
If I make two copies of foo.app, naming them fie.app and fum.app, when I run them, MacOS 10.7.x seems to identify them as identical. I'm fairly sure there's something besides the file name of an app that the OS uses for identification. (In fact, I'll be changing the file names to something completely different.) I think identification internally has to do with one or more of the user entries in Xcode's new project wizard. "Product Name" or a derivative thereof?
What I'm looking for is how to edit the right value in an existing project, to avoid going through the new project wizard 10 times and creating 10 different projects, if that's possible.
TIA
A: You need to change the bundle name and identifier in the project info.plist file. Specifically, the following entries:
<key>CFBundleDisplayName</key>
<string>${PRODUCT_NAME}</string>
<key>CFBundleExecutable</key>
<string>${EXECUTABLE_NAME}</string>
<key>CFBundleIdentifier</key>
<string>xxx.yyy.${PRODUCT_NAME:rfc1034identifier}</string>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to compare string or int type? if($_GET['choice'] == (int))
or
if($_GET['choice'] == (string))
I got an error.
A: All GET parameters are strings. If you want to be certain that it's an integer in the string then you should sanitize it.
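For example, one way to validate it (a sketch; filter_var with FILTER_VALIDATE_INT returns false when the value is not an integer):
$choice = filter_var($_GET['choice'], FILTER_VALIDATE_INT);
if ($choice !== false) {
    // $choice is now an int and safe to compare numerically
}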
A: To check if the string $_GET['choice'] may be represented as an integer, use ctype_digit(), eg
if (ctype_digit($_GET['choice'])) {
// integer
}
A: You're doing it wrong. Your example shows CASTING:
$var = (int)"15"; // casts the string 15 as an integer
If you want to check whether something is an INTEGER, you can use the is_int() function in PHP. There are other functions that will do this for strings, arrays, etc.
http://us2.php.net/manual/en/function.is-int.php
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I auto-increment a unique version number using ActiveRecord? I have a model License that needs to have a version number (an Integer) but I don't want that to be confused at all with the actual id.
I have a field version_number. What is the simplest way to tell ActiveRecord to automatically increment it on creation?
A: Use a before_create callback to set version_number to the last version + 1:
class License < ActiveRecord::Base
before_create :set_version
...
def set_version
license = License.last
current_version = license.nil? ? 0 : license.version_number
self.version_number = current_version + 1
end
...
end
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why isn't g++ tail call optimizing while gcc is? I wanted to check whether g++ supports tail calling so I wrote this simple program to check it: http://ideone.com/hnXHv
using namespace std;
size_t st;
void PrintStackTop(const std::string &type)
{
int stack_top;
if(st == 0) st = (size_t) &stack_top;
cout << "In " << type << " call version, the stack top is: " << (st - (size_t) &stack_top) << endl;
}
int TailCallFactorial(int n, int a = 1)
{
PrintStackTop("tail");
if(n < 2)
return a;
return TailCallFactorial(n - 1, n * a);
}
int NormalCallFactorial(int n)
{
PrintStackTop("normal");
if(n < 2)
return 1;
return NormalCallFactorial(n - 1) * n;
}
int main(int argc, char *argv[])
{
st = 0;
cout << TailCallFactorial(5) << endl;
st = 0;
cout << NormalCallFactorial(5) << endl;
return 0;
}
When I compiled it normally it seems g++ doesn't really notice any difference between the two versions:
> g++ main.cpp -o TailCall
> ./TailCall
In tail call version, the stack top is: 0
In tail call version, the stack top is: 48
In tail call version, the stack top is: 96
In tail call version, the stack top is: 144
In tail call version, the stack top is: 192
120
In normal call version, the stack top is: 0
In normal call version, the stack top is: 48
In normal call version, the stack top is: 96
In normal call version, the stack top is: 144
In normal call version, the stack top is: 192
120
The stack difference is 48 in both of them, while the tail call version needs one more int. (Why?)
So I thought optimization might be handy:
> g++ -O2 main.cpp -o TailCall
> ./TailCall
In tail call version, the stack top is: 0
In tail call version, the stack top is: 80
In tail call version, the stack top is: 160
In tail call version, the stack top is: 240
In tail call version, the stack top is: 320
120
In normal call version, the stack top is: 0
In normal call version, the stack top is: 64
In normal call version, the stack top is: 128
In normal call version, the stack top is: 192
In normal call version, the stack top is: 256
120
The stack size increased in both cases, and while the compiler might think my CPU is slower than my memory (which it's not anyway), I don't know why 80 bytes are necessary for a simple function. (Why is it?)
The tail call version also takes more space than the normal version, which would be completely logical if an int were 16 bytes. (No, I don't own a 128-bit CPU.)
Now, thinking about what reason the compiler might have not to tail call, I thought it might be exceptions, because they depend tightly on the stack. So I tried without exceptions:
> g++ -O2 -fno-exceptions main.cpp -o TailCall
> ./TailCall
In tail call version, the stack top is: 0
In tail call version, the stack top is: 64
In tail call version, the stack top is: 128
In tail call version, the stack top is: 192
In tail call version, the stack top is: 256
120
In normal call version, the stack top is: 0
In normal call version, the stack top is: 48
In normal call version, the stack top is: 96
In normal call version, the stack top is: 144
In normal call version, the stack top is: 192
120
This cut the normal version back to the non-optimized stack size, while the optimized one has 8 bytes over it. Still, an int is not 8 bytes.
I thought there was something I missed in C++ that needs the stack arranged this way, so I tried C: http://ideone.com/tJPpc
Still no tail calling, but the stack is much smaller (32 bytes per frame in both versions).
Then I tried with optimization:
> gcc -O2 main.c -o TailCall
> ./TailCall
In tail call version, the stack top is: 0
In tail call version, the stack top is: 0
In tail call version, the stack top is: 0
In tail call version, the stack top is: 0
In tail call version, the stack top is: 0
120
In normal call version, the stack top is: 0
In normal call version, the stack top is: 0
In normal call version, the stack top is: 0
In normal call version, the stack top is: 0
In normal call version, the stack top is: 0
120
Not only did it tail call optimize the first, it also tail call optimized the second!
Why doesn't g++ do tail call optimization when it's clearly available on the platform? Is there any way to force it?
A: I don't find the other answer satisfying because a local object has no effect on the stack once it's gone.
Here is a good article which mentions that the lifetime of local objects extends into the tail-called function. Tail call optimization requires destroying locals before relinquishing control, so GCC will not apply it unless it is sure that no local object will be accessed by the tail call.
Lifetime analysis is hard though, and it looks like it's being done too conservatively. Setting a global pointer to reference a local disables TCO even if the local's lifetime (scope) ends before the tail call.
{
int x;
static int * p;
p = & x;
} // x is dead here, but the enclosing function still has TCO disabled.
This still doesn't seem to model what's happening, so I found another bug. Passing a local to a parameter with a user-defined or non-trivial destructor also disables TCO. (Defining the destructor = delete allows TCO.)
std::string has a nontrivial destructor, so that's causing the issue here.
The workaround is to do these things in a nested function call, because lifetime analysis will then be able to tell that the object is dead by the tail call. But there's no need to forgo all C++ objects.
A: The original code with the temporary std::string object is still tail recursive, since the destructor for that object is executed immediately after exit from PrintStackTop("");, so nothing should be executed after the recursive return statement.
However, there are two issues that lead to confusion of tail call optimization (TCO):
*
*the argument is passed by reference to the PrintStackTop function
*non-trivial destructor of std::string
It can be verified with a custom class that each of those two issues is able to break TCO on its own.
As it is noted in the previous answer by @Potatoswatter there is a workaround for both of those issues. It is enough to wrap call of PrintStackTop by another function to help the compiler to perform TCO even with temporary std::string:
void PrintStackTopTail()
{
PrintStackTop("tail");
}
int TailCallFactorial(int n, int a = 1)
{
PrintStackTopTail();
//...
}
Note that it is not enough to limit the scope by enclosing { PrintStackTop("tail"); } in curly braces; it must be wrapped in a separate function.
Now it can be verified with g++ version 4.7.2 (compilation options -O2) that tail recursion is replaced by a loop.
The similar issue is observed in Pass-by-reference hinders gcc from tail call elimination
Note that printing (st - (size_t) &stack_top) is not enough to be sure that TCO is performed; for example, with the optimization option -O3 the function TailCallFactorial is inlined into itself five times, so TailCallFactorial(5) is executed as a single function call, but the issue is revealed for larger argument values (for example TailCallFactorial(15);). So TCO should be verified by reviewing the assembly output generated with the -S flag.
A: Because you're passing a temporary std::string object to the PrintStackTop(std::string) function. This object is allocated on the stack and thus prevents the tail call optimization.
I modified your code:
void PrintStackTopStr(char const*const type)
{
int stack_top;
if(st == 0) st = (size_t) &stack_top;
cout << "In " << type << " call version, the stack top is: " << (st - (size_t) &stack_top) << endl;
}
int RealTailCallFactorial(int n, int a = 1)
{
PrintStackTopStr("tail");
if(n < 2)
return a;
return RealTailCallFactorial(n - 1, n * a);
}
Compile with: g++ -O2 -fno-exceptions -o tailcall tailcall.cpp
And it now uses the tail call optimisation. You can see it in action if you use the -S flag to produce the assembly:
L39:
imull %ebx, %esi
subl $1, %ebx
L38:
movl $LC2, (%esp)
call __Z16PrintStackTopStrPKc
cmpl $1, %ebx
jg L39
You see the recursive call inlined as a loop (jg L39).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Nodejs throwing TypeError('first argument must be a string, Array, or Buffer'); I am following a nodejs book which is online: http://nodebeginner.org/ and am stuck at one of the sections. That section (http://nodebeginner.org/#head22) requires me to create the following 4 files:
**index.js**:
var server = require("./server");
var router = require("./router");
var requestHandlers = require("./requestHandlers");
var handle = {};
handle["/"] = requestHandlers.start;
handle["/start"] = requestHandlers.start;
handle["/upload"] = requestHandlers.upload;
server.start(router.route, handle);
**requestHandlers.js**:
function start(){
console.log("Request handler 'start' was called.");
return "Hello start";
}
function upload(){
console.log("Request handler 'upload' was called.");
return "Hello Upload";
}
exports.start = start;
exports.upload = upload;
**router.js**:
function route(handle, pathname){
console.log("About to route a request for " + pathname);
if(typeof handle[pathname] === 'function'){
handle[pathname]();
}else{
console.log("No request handler found for " + pathname);
return "404 Not found";
}
}
exports.route = route;
**server.js**:
var http = require("http");
var url = require("url");
function start(route, handle){
function onRequest(request, response){
var pathname = url.parse(request.url).pathname;
console.log("Request for " + pathname + " received.");
response.writeHead(200, {"Content-Type":"text/plain"});
var content = route(handle, pathname);
response.write(content);
response.end();
}
http.createServer(onRequest).listen(8888);
console.log("Server has started.");
}
exports.start = start;
When I run, it returns me the following error:
Server has started. Request for / received. About to route a request
for / Request handler 'start' was called.
http2.js:598
throw new TypeError('first argument must be a string, Array, or
Buffer');
^ TypeError: first argument must be a string, Array, or
Buffer
at ServerResponse.write (http2.js:598:11)
at Server.onRequest (/var/www/node/server.js:11:12)
at Server.emit (events.js:70:17)
at HTTPParser.onIncoming (http2.js:1451:12)
at HTTPParser.onHeadersComplete (http2.js:108:31)
at Socket.ondata (http2.js:1347:22)
at TCP.onread (net_uv.js:309:27)
I could trace the error to server.js and when I commented out these 2 lines, it works:
var content = route(handle, pathname);
response.write(content);
What am I doing wrong?
A: You're forgetting to return the value on the 4th line of router.js
handle[pathname]();
It will work properly if you change it to:
return handle[pathname]();
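For reference, here is a minimal corrected route function for router.js; it is the question's own code with only the return added:
function route(handle, pathname){
    console.log("About to route a request for " + pathname);
    if(typeof handle[pathname] === 'function'){
        // return the handler's result so server.js gets a string to write
        return handle[pathname]();
    }else{
        console.log("No request handler found for " + pathname);
        return "404 Not found";
    }
}
exports.route = route;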
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How Does WordPress Handle Upgrades? Can anyone describe the pseudocode for how WordPress handles its upgrades? As in, you go into WordPress admin and choose to upgrade the version of WordPress.
I mean, does it use FTP APIs locally? Does it send credentials to another host which reconnects back with FTP APIs? Does it download files with Curl? Does it polyfill if a given API isn't there and go another route? Does it test file permissions to see which API to use?
I've got a client who wants something like this built into a web application unrelated to WordPress.
A: Have a look at wp-admin/includes/update.php and wp-admin/includes/class-wp-upgrader.php
And here for some explanation: http://tech.ipstenu.org/2011/how-the-wordpress-upgrade-works/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to cache history.back like Safari in other browsers? I want
history.back()
to be cached like Safari naturally does.
But this does not happen in other browsers.
How can I implement Safari-like caching of history.back() in other browsers?
A: You can cache the page resources in 'localStorage', but most modern browsers already do something similar (and better). Despite this native browser cache, the code generated from these resources takes a while to be calculated and applied.
You can give a little help to the browser structuring your website pages this way:
<script>
if(!localStorage[location.pathname]) {
//load this page from server
localStorage[location.pathname] = getGeneratedPage();
} else {
document.body.innerHTML = parseGeneratedPage(localStorage[location.pathname]);
}
</script>
This is just a VERY generic example. The getGeneratedPage could be a function which stores ONLY:
*
*The DOM tree after page load
*CSS rules matched for this page
*JS functions which have at least one listener
*Base64 Images(only recommended for small images or previews of big images)
*etc
Also, you can make a server-side version of this or something like Opera Turbo.
Well, there are countless ways to make your page load in the blink of an eye. Hope it helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7563992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: problem with nivo-slider in simpleModal The nivo-slider displays and works fine; the problem I'm having is that if you close the modal window and then re-open it, the nivo-slider is broken. It stays stuck on the first picture and all the buttons related to nivo-slider are unresponsive.
Is there a way I can append the way simpleModal closes so it won't break nivo-slider?
simpleModal
Nivo Slider
Note: This probably happens because nivo-slider runs on page load; however, simpleModal unloads it when the modal window is closed, so there's no way to reload it if you reopen the modal window.
So the solution to this would probably be changing simpleModal so that instead of unloading its content when the window is closed, it simply hides it. The problem is that I don't know how to do that.
A: Since no one else was willing to answer my question, I dug through simpleModal's source code and did it myself.
I looked through the source until I found the function that it uses to close the modal window and I looked for the sections where it removed things and changed it to just hiding them.
Note: My fix will not work for people who have information in the modal window that is going to be changed by the user, but it will work for anyone using the window for static information like I am.
Simply apply these changes to the source code:
From: (this code starts on line 664 of the unmodified source of simpleModal version 1.4.1)
// if the data came from the DOM, put it back
if (s.d.placeholder) {
var ph = $('#simplemodal-placeholder');
// save changes to the data?
if (s.o.persist) {
// insert the (possibly) modified data back into the DOM
ph.replaceWith(s.d.data.removeClass('simplemodal-data').css('display', s.display));
}
else {
// remove the current and insert the original,
// unmodified data back into the DOM
s.d.data.hide().remove();
ph.replaceWith(s.d.orig);
}
}
else {
// otherwise, remove it
s.d.data.hide().remove();
}
// remove the remaining elements
s.d.container.hide().remove();
s.d.overlay.hide();
s.d.iframe && s.d.iframe.hide().remove();
setTimeout(function(){
// opera work-around
s.d.overlay.remove();
// reset the dialog object
s.d = {};
}, 10);
to:
// if the data came from the DOM, put it back
if (s.d.placeholder) {
var ph = $('#simplemodal-placeholder');
// save changes to the data?
if (s.o.persist) {
// insert the (possibly) modified data back into the DOM
ph.replaceWith(s.d.data.removeClass('simplemodal-data').css('display', s.display));
}
else {
// remove the current and insert the original,
// unmodified data back into the DOM
s.d.data.hide();
}
}
else {
// otherwise, remove it
s.d.data.hide();
}
// remove the remaining elements
s.d.container.hide();
s.d.overlay.hide();
s.d.iframe && s.d.iframe.hide();
setTimeout(function(){
// opera work-around
s.d.overlay.remove();
// reset the dialog object
s.d = {};
}, 10);
I only changed 4 lines, but now simpleModal only hides the content when you close the window; it doesn't unload it.
To get the unmodified source for simpleModal, just click on the link I have placed below to download it.
simpleModal
Note: That is the full, uncompressed source code, for the purpose of development. After you have finished editing it, I suggest you compress it by using this website: JavascriptCompressor
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What MySQL my.ini parameters should I change for a large table join and update? I have the following:
*
*TableA - 20M rows
*TableB - 500K rows
And many queries, in particular the following, take forever.
UPDATE
TableA AS A
INNER JOIN
TableB AS B
ON B.Value IS NOT NULL AND A.Key=B.Key
SET
A.Value = B.Value
WHERE
A.Value IS NULL;
I know my MySQL (MyISAM) is not configured for large tables and I'm sure it could use more of the available memory (4GB total) or CPUs (i7). What parameters in my.ini should I be looking at?
I've started with key_buffer_size = 1536M because tableA has a 1gb index.
A: For innodb
*
*innodb_buffer_pool_size set to approx 80% of memory you want available to mysql
*innodb_log_file_size * innodb_log_files_in_group should be large enough that changes will be written to disk no more than once a second
But easier still, use the configuration wizard: https://tools.percona.com/wizard
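As a rough illustration only (the numbers below are assumptions for a machine giving MySQL about 4GB of RAM, not recommendations), the relevant my.ini section might look something like this:
[mysqld]
# MyISAM index cache (the question already uses 1536M)
key_buffer_size           = 1536M
# the InnoDB settings only matter if the tables are moved to InnoDB
innodb_buffer_pool_size   = 2G
innodb_log_file_size      = 256M
innodb_log_files_in_group = 2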
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: how does the compile() work in python? I have two pieces of code which really confuse me.
def get_context():
__gc = globals()
__lc = locals()
def precompiler(code):
exec code in __lc
def compiler(script, scope):
return compile(script, scope, 'eval')
def executor(expr):
return eval(expr, __gc, __lc)
return precompiler, compiler, executor
maker1, compiler1, executor1 = get_context()
maker2, compiler2, executor2 = get_context()
maker1("abc = 123")
maker2("abc = 345")
expr1 = compiler1("abc == 123", "test.py")
print "executor1(abc == 123):", executor1(expr1)
print "executor2(abc == 123):", executor2(expr1)
the result is:
executor1(abc == 123): True
executor2(abc == 123): False
Why is compile executed in the closure only once, while the byte-code can run in both?
And there is another code here:
def get_context():
__gc = globals()
__lc = locals()
test_var = 123
def compiler(script, scope):
return compile(script, scope, 'eval')
def executor(expr):
return eval(expr, __gc, __lc)
return compiler, executor
compiler1, executor1 = get_context()
compiler2, executor2 = get_context()
expr1 = compiler1("test_var == 123", "test.py")
print "executor1(test_var == 123):", executor1(expr1)
print "executor2(test_var == 123):", executor2(expr1)
the result is:
NameError: name 'test_var' is not defined
And how did this happen?
Why does compile need to check the environment (variables or other things) of the closure while it is not dependent on the closure? This is what confuses me!
A: In your first example, you are executing 'abc=123' in your first context, and 'abc=345' in your second context. So 'abc == 123' is true in your first context and false in your second context.
In your second example, you have caught an interesting situation where the interpreter has removed test_var from the context because test_var isn't referenced.
A: For your first question, compile just takes the Python code and produces the bytecode. It is not dependent in any way on the closure where you compiled it. It's no different than if you had produced, say, a string. That string isn't permanently tied to the function where it was created, and neither is the code object.
For your second question, locals() builds a dictionary of the local variables when it is called. Since you set up test_var after calling locals(), it doesn't have it. If you want test_var inside locals(), you need to call it afterwards.
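A minimal sketch of that ordering fix, keeping the rest of get_context the same as in the question:
def get_context():
    test_var = 123          # define the local first...
    __gc = globals()
    __lc = locals()         # ...then snapshot, so 'test_var' ends up in __lc
    def compiler(script, scope):
        return compile(script, scope, 'eval')
    def executor(expr):
        return eval(expr, __gc, __lc)
    return compiler, executor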
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: get SENT headers in an XMLHttpRequest Trying to get the Request headers from the XHR object, but with no luck, is there a hidden method or property of that object that will expose the headers sent by the browser?
I already know how to set custom request headers and view the response headers, I'm looking to get a list of all REQUEST headers sent, ones created by the browser and my custom ones.
I'm using webkit/chrome, don't care about other browsers.
EDIT: I'm not looking to monitor the request, I'm building a web app and I need to list those headers and display them within the app, please don't tell me about fiddler, firebug and chrome tools, that's not what I'm looking for.
A: There is no method in the XMLHttpRequest API to get the sent request headers. There are methods to get the response headers only, and set request headers.
You'll have to either have the server echo the headers, or use a packet sniffer like Wireshark.
A: Try using Fiddler Web Debugger.
http://www.fiddler2.com/fiddler2/
You can capture the request that was sent in any browser as well as inspect the request headers, response headers, and even copy a capture sent request and send it out as your own.
A: Assuming you are using jQuery, and you're looking for anything attached, but maybe not ALL headers sent, this could help. Not sure if it meets your exact needs, (since the browser tends to add its own things), but if you need to grab your own headers first, this works:
$.ajaxSetup({
beforeSend: function (jqXHR, settings) {
if(!(settings.headers && settings.headers.token)) {
//no token header was set, so set the request header
jqXHR.setRequestHeader('token', newtoken);
}
}
})
A: As the name suggests, sent headers were SENT, (duh)! And the XMLHttpRequest class doesn't store sent headers in RAM or put sent headers in an instance of XMLHttpRequest... which is good for performance.
If you want to get the headers that have been sent, all you need to do is to create a log mechanism.
And since custom request headers are created through XMLHttpRequest.prototype.setRequestHeader(), you need to intercept XMLHttpRequest.prototype.setRequestHeader():
var sentHeaders = {}
var originalXMLHttpRequest_setRequestHeader = XMLHttpRequest.prototype.setRequestHeader;
XMLHttpRequest.prototype.setRequestHeader = function(header, value) {
sentHeaders[header] = value;
originalXMLHttpRequest_setRequestHeader.call(this, header, value);
}
That's all. No need for an external library or Wireshark; it's all done within JavaScript.
Just make sure the intercept code above is executed before any XMLHttpRequest is initialized.
P.S. This code will obviously only intercept the custom headers created through setRequestHeader(). The XMLHttpRequest itself will set some default headers which can't be accessed through this method.
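A small usage sketch of the intercept above (the endpoint and header name are made up):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/endpoint');
xhr.setRequestHeader('X-Custom-Token', 'abc123');
xhr.send();
console.log(sentHeaders); // { 'X-Custom-Token': 'abc123' }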
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
}
|
Q: Can I use an NSDictionaryController for a dictionary of dictionaries? I have a plist file that holds information I need to display in an app organised as a dictionary of dictionaries. I've just started programming Cocoa so am not sure the best way to go about this. Obviously I can do it all manually, and code up the loops and add the data to the UI elements, but it seems to me that bindings and the provided controllers should let me do this more easily.
I was specifically wondering if there was a direct way (e.g. using mostly Interface Builder) to link the NSDictionary I get from reading the plist file, that itself contains further NSDictionary elements, which in turn contain name-value string pairs, to an appropriate user interface element -- probably an outline view or a browser.
Alternatively, the data would fit into a function browser type panel (like in Excel) where the top level keys are categories of functions, the next level are functions in that category, and I can just populate a text area with the lowest-level details -- i.e. the value data from the final dictionary.
A: I don't think you are going to be able to do this with an NSBrowser or NSOutlineView. The reason I say that is because if you are using bindings with those views you need to use an NSTreeController. NSTreeController provides the ability to specify which keys in your model indicate whether or not the current object has children objects (isLeaf) and how to access the children objects (children).
So if you are going to use one of those two views, you must be able to add additional keys and properties to your model to do so. Many times when I work with NSOutlineView and NSBrowser I find it easiest to skip bindings altogether and just use the delegate & datasource methods. They require more code but they aren't hard to put together, and sometimes I prefer them to bindings if my data model is complex or if the data is not in a format that is easily pumped through an NSTreeController.
However you could use an NSTableView by doing the following.
*
*Create an NSDictionaryController in your NIB.
*In the controller that reads in your plist, create an outlet for the NSDictionaryController and hookup the outlet using Interface Builder.
*In the code that reads the plist, add an additional line of code that sets the NSDictionaryController's content to the root dictionary from the plist (a minimal sketch of this step follows after the list).
*In your NIB, create an NSArrayController. Bind the array controller's "Content Array" binding to the NSDictionaryController. For the "Controller Key" binding property, specify "arrangedObjects".
*Now take an NSTableView and place it in your NIB. Bind each of the NSTableColumn's "Value" bindings to the NSArrayController and for the "Controller Key" binding property, specify the key from the dictionary whose value you want to display in the table column.
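A rough sketch of step 3, assuming an IBOutlet named dictionaryController (from step 2) and a plist named Functions.plist (both names are illustrative):
NSString *path = [[NSBundle mainBundle] pathForResource:@"Functions" ofType:@"plist"];
NSDictionary *root = [NSDictionary dictionaryWithContentsOfFile:path];
[dictionaryController setContent:root]; // setContent: comes from NSObjectController, which NSDictionaryController inherits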
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is a memory warning received on the main thread in iOS? I have seen some unusual behavior when my app responds to memory warnings - data getting out of sync primarily.
If my app receives a memory warning, will the warning pass control to the main thread?
If not, I assume I must do some data protection if the memory warning will free data that might be in use on the main thread.
A: Given that the recommended use of the -didReceiveMemoryWarning method is to toss out views, and given that views should only be manipulated from the main thread, it is safe to assume that the method will only ever be invoked on the main thread.
If you ever find that this is not the case, or you would like this to be explicitly stated in the documentation, please file an enhancement request.
A: I'm pretty sure that -didReceiveMemoryWarning will only be called on the main thread.
Regardless, this is what you can do to ensure this without (potentially) deadlocking:
void invokeBlockOnMainThread(dispatch_block_t block) {
if([NSThread isMainThread]) {
block();
return;
}
dispatch_sync(dispatch_get_main_queue(), block);
}
Call this function within -didReceiveMemoryWarning, passing in a block with everything that you need done and then you are guaranteed to be on the main thread while executing the code in the passed-in block.
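For example, a hypothetical -didReceiveMemoryWarning using the helper above might look like this (releaseCachedData is an assumed cleanup method, not part of any API):
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    invokeBlockOnMainThread(^{
        // safe to touch views and shared state here, since we are on the main thread
        [self releaseCachedData];
    });
}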
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Cannot convert uint* to uint[] I have this code which doesn't compile:
public struct MyStruct
{
private fixed uint myUints[32];
public uint[] MyUints
{
get
{
return this.myUints;
}
set
{
this.myUints = value;
}
}
}
Now, I know why the code won't compile, but I'm apparently at the point where I'm too tired to think, and need some help to put me in the right direction. I haven't worked with unsafe code in a while, but I'm pretty sure I need to do an Array.Copy (or Buffer.BlockCopy?) and return a copy of the array, however those don't take the arguments I need. What am I forgetting about?
Thanks.
A: You have to work in a fixed context when working with a fixed buffer:
public unsafe struct MyStruct {
private fixed uint myUints[32];
public uint[] MyUints {
get {
uint[] array = new uint[32];
fixed (uint* p = myUints) {
for (int i = 0; i < 32; i++) {
array[i] = p[i];
}
}
return array;
}
set {
fixed (uint* p = myUints) {
for (int i = 0; i < 32; i++) {
p[i] = value[i];
}
}
}
}
}
A: There may be an easier solution, but this works:
public unsafe struct MyStruct
{
private fixed uint myUints[32];
public uint[] MyUints
{
get
{
fixed (uint* ptr = myUints)
{
uint[] result = new uint[32];
for (int i = 0; i < result.Length; i++)
result[i] = ptr[i];
return result;
}
}
set
{
// TODO: make sure value's length is 32
fixed (uint* ptr = myUints)
{
for (int i = 0; i < value.Length; i++)
ptr[i] = value[i];
}
}
}
}
A: This works with int, float, byte, char and double only, but you can use Marshal.Copy() to move data from fixed array to managed array.
Example:
class Program
{
static void Main(string[] args)
{
MyStruct s = new MyStruct();
s.MyUints = new int[] {
1, 2, 3, 4, 5, 6, 7, 8,
9, 10, 11, 12, 13, 14, 15, 16,
1, 2, 3, 4, 5, 6, 7, 8,
9, 10, 11, 12, 13, 14, 15, 16 };
int[] chk = s.MyUints;
// chk containts the proper values
}
}
public unsafe struct MyStruct
{
public const int Count = 32; //array size const
fixed int myUints[Count];
public int[] MyUints
{
get
{
int[] res = new int[Count];
fixed (int* ptr = myUints)
{
Marshal.Copy(new IntPtr(ptr), res, 0, Count);
}
return res;
}
set
{
fixed (int* ptr = myUints)
{
Marshal.Copy(value, 0, new IntPtr(ptr), Count);
}
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: To shard or not to shard? GAE/java/jdo I'm currently porting some work from MySQL to Google App Engine/Java. I'm using JDO, as well as the lower level java API where required.
I read through the optimization guide about sharding counters: http://code.google.com/appengine/articles/sharding_counters.html
I'm still building the foundation of my app. I know that premature optimization is the root of all evil; but this is clearly documented in order to avoid contention. So I'm having trouble deciding if I should be biased one way or the other.
So should I be sharding counters (and other possibly higher frequency write operation objects) by default, or should I go forward without sharding and implement on an as needed basis?
A: The salient meaning of "premature" here is "before the proper time." Designing to avoid limits, when those limits are well understood, is not premature.
Shard your counters.
A: Even with effective sharding, maintaining aggregates can add substantial load to your application. If you need that aggregate and you can't afford an approximation, then using a sharded aggregate is not a premature optimization; there is no next-best alternative. If you don't actually need the counter, then the time it will take to implement it could be better spent elsewhere.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: RVM doesn't seem to play well with ruby-1.9.3-preview1 I installed the ruby-1.9.3-preview1 after updating my rvm by following the steps below:
» rvm get head
» rvm reload
» rvm install ruby-1.9.3-preview1
Then I create a gem set for my project and try to use it.
» rvm --create ruby-1.9.3@myproject
» rvm use ruby-1.9.3@myproject
I test it out by:
» ruby -v
ruby 1.9.3dev (2011-07-31 revision 32789) [x86_64-darwin11.1.0]
But then when I try using it:
» bundle exec rails server
/Users/<home>/.rvm/gems/ruby-1.9.2-p290@envision/gems/activesupport-3.0.7/lib/active_support/dependencies.rb:239:in `block in require': iconv will be deprecated in the future, use String#encode instead.
=> Booting WEBrick
it still seems to be using the older version of Ruby. Did anyone else face this issue or am I doing something wrong?
EDIT
I originally intended to install the latest ruby-1.9.3-rc1. I switched to using that, by following the same steps above, and still have the same issue.
A: I think I found the issue, which seems completely unrelated to rvm or ruby-1.9.3. There was a problem with my bundler config. I hope this helps anyone who is surprised to see their gem sets not being used.
So,
» bundle config
Settings are listed in order of priority. The top value will be used.
path
Set for the current user (/Users/<home>/.bundle/config): "/Users/<home>/.rvm/gems/ruby-1.9.2-p290@myproject"
….
which meant it would always use the gem set under the path by default.
» bundle config path ''
seems to fix the issue. I am sure there is a better way to remove any config overrides on the bundle config default with an explicit remove. But so far this has worked and I have my new gem set with 1.9.3-rc1 being used. Unfortunately not all my gems are compiling with 1.9.3-rc1, specifically an issue with gherkin-2.2.9. Let me know if someone got it to work. I guess this is a different question.
EDIT
» bundle config path ''
Doing that is a bad idea. I realized my mistake soon after, as this makes bundler default to the current directory for creating your gemset.
» bundle config path $GEM_HOME
The above is better, after making sure GEM_HOME points to …/ruby-1.9.3-rc1@myproject . So yes I would still like to know how to let bundle config use the defaults so I don't have to change it myself. I tried editing .bundle/config with no luck under the myproject directory.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Maps, Facebook, Email, Youtube Errors in Android Eclipse Simulator I am getting the errors within the simulator (1.6 version) when I click and try to access the information.
I have created a screen capture of all the errors in the simulator.
I looked into some of the coding and a lot of the features say:
//showLocationMap
public void showLocationMap(){
It looks like a lot of the features say void. Maybe that is the issue? So how do I change that so that the feature is allowed?
A: You must download and install the gapps and ARM translation packages in your simulator, or try downloading Genymotion for Android simulators and installing the gapps there. See the link:
http://forum.xda-developers.com/showthread.php?t=2528952
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Android HttpPost with Gzip and NameValuePair Is it possible to set 2 entities for a HttpPost? Like:
HttpPost post = new HttpPost("http://www.abc.com");
List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(2);
nameValuePairs.add(new BasicNameValuePair("A",
a));
nameValuePairs.add(new BasicNameValuePair("B", b));
post.setEntity(new UrlEncodedFormEntity(nameValuePairs, "UTF-8"));
post.setHeader("Accept-Encoding", "gzip");
ByteArrayEntity bae = new ByteArrayEntity(compress(json));
post.setEntity(bae);
HttpResponse resp;
resp = client.execute(post);
What I'm trying to achieve is telling the server that there are some parameters and a zip file.
A: Yes, you can send a zip file and pass parameters using nameValuePairs. Go to the link below; you may find your solution there.
http://vikaskanani.wordpress.com/2011/01/11/android-upload-image-or-file-using-http-post-multi-part/
Android upload multi files to server via http post
In this link, place your zip file path in place of the image; you may have to do some more modification.
A: Not like this. You need to use a multi-part entity; you can manually encode it if it is relatively simple, or use org.apache.http.entity.mime.MultipartEntity (which is not part of the Android SDK). There are multiple posts about it on SO; search for 'android multipart'.
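A rough sketch with MultipartEntity from the separate httpmime jar (the part names and MIME type are assumptions, exception handling is omitted, and compress(json) is the method from the question):
MultipartEntity entity = new MultipartEntity();
entity.addPart("A", new StringBody(a));
entity.addPart("B", new StringBody(b));
// ByteArrayBody(byte[] data, String mimeType, String fileName)
entity.addPart("payload", new ByteArrayBody(compress(json), "application/x-gzip", "payload.gz"));
post.setEntity(entity);
HttpResponse resp = client.execute(post);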
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why don't Funcs accept more than 16 arguments? Since Javascript is the language that I am the most proficient at, I am familiar with using functions as first-class objects. I had thought that C# lacked this feature, but then I heard about Func and Action and delegate, which I think are pretty awesomesauce.
For example, you can declare a Func that concatenates two strings and puts a space between them like this:
Func<string, string, string> concat = (a,b) => a + " " + b;
I noticed that when you type
Func<
the IntelliSense shows that it has 17 overloads:
delegate System.Func<out TResult>
delegate System.Func<in T, out TResult>
delegate System.Func<in T1, in T2, out TResult>
...snip...
delegate System.Func<in T1, in T2, in T3, in T4, in T5, in T6, in T7, in T8, in T9, in T10, in T11, in T12, in T13, in T14, in T15, in T16, out TResult>
That made me laugh. I looked at the MSDN docs for Func and laughed again. This made me try to declare a Func with 17 arguments. It causes an error (Using the generic type 'System.Func<TResult>' requires 1 type arguments).
I can agree that it's probably not a good idea to have a Func that accepts more than 16 arguments. Even so, this seems like a kludgy way for Func to be implemented. It requires 17 trivially different overloads to be documented. This is all it really should need to know: the last type parameter is the return type, and all the type parameters before it are the argument types.
So what could I do if I wanted to create a Func with more than 16 parameters? Why is there a limit anyway? Why can't C# just let you declare a Func with an arbitrary number of arguments?
A: I think I understand - what you can do with JavaScript and functions (arguments) is pretty neat, but it's also not statically typed.
But please note that you never need more than one argument in functional programming anyway. You can chain as many arguments as you like by returning another function (this is a common trait in FP and heavily used with currying, a technique also available in JS but only by bending the system a bit).
Of course this is awkward in C#:
Func<A1,Func<A2,Func<A3,...<Func<An,Result>>...>
x1 =>
(x2 =>
(x3 =>
...
(xn =>
{ /* return something */ }
))...);
but this is what F# is for ;) and of course you should never make a function with more than a few arguments (way below 16!) anyhow.
A: You're hoping for something like variadic type arguments which C# lacks. C# requires the arity of generic types to be fixed, therefore the heinous proliferation of Func, Action, and Tuple types.
If you're language shopping, this feature was added in C++11, but you should probably just use jQuery. :-)
A: You can create your own delegate with more than 16 arguments. Or you can use Tuple<T1, T2, T3, T4, T5, T6, T7, TRest> (or any other data structure) as parameter.
A: You can just define any delegate you need. So a Func with 20 parameters would be defined like this:
public delegate R Func<
P0, P1, P2, P3, P4, P5, P6, P7, P8, P9,
P10, P11, P12, P13, P14, P15, P16, P17, P18, P19, R>(
P0 p0, P1 p1, P2 p2, P3 p3, P4 p4,
P5 p5, P6 p6, P7 p7, P8 p8, P9 p9,
P10 p10, P11 p11, P12 p12, P13 p13, P14 p14,
P15 p15, P16 p16, P17 p17, P18 p18, P19 p19);
You could then use it like this:
Func<
int, int, int, int, int, int, int, int, int, int,
int, int, int, int, int, int, int, int, int, int, int> f = (
p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, p10,
p11, p12, p13, p14, p15, p16, p17, p18, p19) =>
p0 + p1 + p2 + p3 + p4 + p5 + p6 + p7 + p8 + p9 + p10
+ p11 + p12 + p13 + p14 + p15 + p16 + p17 + p18 + p19;
var r = f(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1);
C# also lets you use lambda syntax on any delegate, so you could also do this:
public delegate R ArrayFunc<P, R>(params P[] parameters);
And then use it like so:
ArrayFunc<int, int> af = ps => ps.Sum();
var ar = af(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1);
It's a very flexible and powerful feature of the language.
A: Why is there a limit anyway?
There is an opinion that a function probably should not have more than 3 arguments. If it has more, it becomes increasingly harder to understand. Of course this may not be the reason why it is this way in C#, but this limitation may not be such a bad thing after all.
I would argue that even this limit of 16 is way too much and is encouraging bad design choices already.
A: System.Func delegates are probably there thanks to the BCL team, who realised that including a finite number of predefined generic delegates would be handy (and even required for a lot of situations).
To do what you say, i.e. an unlimited number of generic parameters for a Func delegate, would require a language change; the responsibility would lie with both the C# and VB.NET teams (and probably others) to change the language to allow this.
Maybe, at some point, if the benefit of this feature outweighs the cost of predefining a handful of Func delegates and it is more important than other language changes (and it isn't a breaking change), the relevant teams may implement unlimited generic parameters... it might not be for a while though!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: Testing and design fee for MFi devices We want to obtain an MFi certificate and make our products work with iPhone/iPad.
@Brad Larson gave a very good answer about the MFi application process, but I need some more specific answers.
Looking for experiences on the Apple MFi program registration process
*
*How much is the testing fee paid to a 3rd-party?
*Does the mentioned "20,000$ ~ 80,000$ design fee" include the testing fee in question 1?
*Is the design fee necessary, or can we do the design all by ourselves?
Thanks a lot!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Difference between char* and char[] I know this is a very basic question. I am confused as to why and how are the following different.
char str[] = "Test";
char *str = "Test";
A: One is a pointer and one is an array. They are different types of data.
int main ()
{
char str1[] = "Test";
char *str2 = "Test";
cout << "sizeof array " << sizeof(str1) << endl;
cout << "sizeof pointer " << sizeof(str2) << endl;
}
output
sizeof array 5
sizeof pointer 4
A: The first
char str[] = "Test";
is an array of five characters, initialized with the value "Test" plus the null terminator '\0'.
The second
char *str = "Test";
is a pointer to the memory location of the literal string "Test".
A: char str[] = "Test";
Is an array of chars, initialized with the contents from "Test", while
char *str = "Test";
is a pointer to the literal (const) string "Test".
The main difference between them is that the first is an array and the other one is a pointer. The array owns its contents, which happen to be a copy of "Test", while the pointer simply refers to the contents of the string (which in this case is immutable).
A: Starting from C++11, the second expression is now invalid and must be written:
const char *str = "Test";
The relevant section of the standard is Appendix C section 1.1:
Change: String literals made const
The type of a string literal is changed from “array of char” to “array
of const char.” The type of a char16_t string literal is changed from
“array of some-integer-type” to “array of const char16_t.” The type of
a char32_t string literal is changed from “array of some-integer-type”
to “array of const char32_t.” The type of a wide string literal is
changed from “array of wchar_t” to “array of const wchar_t.”
Rationale: This avoids calling an inappropriate overloaded function,
which might expect to be able to modify its argument.
Effect on original feature: Change to semantics of well-defined feature.
A: The difference is the STACK memory used.
For example, when programming for microcontrollers where very little memory is allocated for the stack, it makes a big difference.
char a[] = "string"; // the compiler puts {'s','t','r','i','n','g', 0} onto STACK
char *a = "string"; // the compiler puts just the pointer onto STACK
// and {'s','t','r','i','n','g',0} in static memory area.
A: "Test" is an array of five characters (4 letters, plus the null terminator).
char str1[] = "Test"; creates that array of 5 characters, and names it str1. You can modify the contents of that array as much as you like, e.g. str1[0] = 'B';
char *str2 = "Test"; creates that array of 5 characters, doesn't name it, and also creates a pointer named str2. It sets str2 to point at that array of 5 characters. You can follow the pointer to modify the array as much as you like, e.g. str2[0] = 'B'; or *str2 = 'B';. You can even reassign that pointer to point someplace else, e.g. str2 = "other";.
An array is the text in quotes. The pointer merely points at it. You can do a lot of similar things with each, but they are different:
char str_arr[] = "Test";
char *strp = "Test";
// modify
str_arr[0] = 'B'; // ok, str_arr is now "Best"
strp[0] = 'W'; // ok, strp now points at "West"
*strp = 'L'; // ok, strp now points at "Lest"
// point to another string
char another[] = "another string";
str_arr = another; // compilation error. you cannot reassign an array
strp = another; // ok, strp now points at "another string"
// size
std::cout << sizeof(str_arr) << '\n'; // prints 5, because str_arr is five bytes
std::cout << sizeof(strp) << '\n'; // prints 4, because strp is a pointer
for that last part, note that sizeof(strp) is going to vary based on architecture. On a 32-bit machine, it will be 4 bytes, on a 64-bit machine it will be 8 bytes.
A: A pointer can be re-pointed to something else:
char foo[] = "foo";
char bar[] = "bar";
char *str = foo; // str points to 'f'
str = bar; // Now str points to 'b'
++str; // Now str points to 'a'
The last example of incrementing the pointer shows that you can easily iterate over the contents of a string, one element at a time.
A: Let's take a look at the following ways to declare a string:
char name0 = 'abcd'; // cannot be anything longer than 4 letters (larger causes error)
cout << sizeof(name0) << endl; // using 1 byte to store
char name1[]="abcdefghijklmnopqrstuvwxyz"; // can represent very long strings
cout << sizeof(name1) << endl; // use large stack memory
char* name2 = "abcdefghijklmnopqrstuvwxyz"; // can represent very long strings
cout << sizeof(name2) << endl; // but use only 8 bytes
We could see that declaring string using char* variable_name seems the best way! It does the job with minimum stack memory required.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
}
|
Q: How are integration tests written for interacting with external API? First up, where my knowledge is at:
Unit Tests are those which test a small piece of code (single methods, mostly).
Integration Tests are those which test the interaction between multiple areas of code (which hopefully already have their own Unit Tests). Sometimes, parts of the code under test requires other code to act in a particular way. This is where Mocks & Stubs come in. So, we mock/stub out a part of the code to perform very specifically. This allows our Integration Test to run predictably without side effects.
All tests should be able to be run stand-alone without data sharing. If data sharing is necessary, this is a sign the system isn't decoupled enough.
Next up, the situation I am facing:
When interacting with an external API (specifically, a RESTful API that will modify live data with a POST request), I understand we can (should?) mock out the interaction with that API (more eloquently stated in this answer) for an Integration Test. I also understand we can Unit Test the individual components of interacting with that API (constructing the request, parsing the result, throwing errors, etc). What I don't get is how to actually go about this.
So, finally: My question(s).
How do I test my interaction with an external API that has side effects?
A perfect example is Google's Content API for shopping. To be able to perform the task at hand, it requires a decent amount of prep work, then performing the actual request, then analysing the return value. Some of this is without any 'sandbox' environment.
The code to do this generally has quite a few layers of abstraction, something like:
<?php
class Request
{
public function setUrl(..){ /* ... */ }
public function setData(..){ /* ... */ }
public function setHeaders(..){ /* ... */ }
public function execute(..){
// Do some CURL request or some-such
}
public function wasSuccessful(){
// some test to see if the CURL request was successful
}
}
abstract class GoogleAPIRequest
{
private $request;
abstract protected function getUrl();
abstract protected function getData();
public function __construct() {
$this->request = new Request();
$this->request->setUrl($this->getUrl());
$this->request->setData($this->getData());
$this->request->setHeaders($this->getHeaders());
}
public function doRequest() {
$this->request->execute();
}
public function wasSuccessful() {
return ($this->request->wasSuccessful() && $this->parseResult());
}
private function parseResult() {
// return false when result can't be parsed
}
protected function getHeaders() {
// return some GoogleAPI specific headers
}
}
class CreateSubAccountRequest extends GoogleAPIRequest
{
private $dataObject;
public function __construct($dataObject) {
parent::__construct();
$this->dataObject = $dataObject;
}
protected function getUrl() {
return "http://...";
}
protected function getData() {
return $this->dataObject->getSomeValue();
}
}
class aTest
{
public function testTheRequest() {
$dataObject = getSomeDataObject(..);
$request = new CreateSubAccountRequest($dataObject);
$request->doRequest();
$this->assertTrue($request->wasSuccessful());
}
}
?>
Note: This is a PHP5 / PHPUnit example
Given that testTheRequest is the method called by the test suite, the example will execute a live request.
Now, this live request will (hopefully, provided everything went well) do a POST request that has the side effect of altering live data.
Is this acceptable? What alternatives do I have? I can't see a way to mock out the Request object for the test. And even if I did, it would mean setting up results / entry points for every possible code path that Google's API accepts (which in this case would have to be found by trial and error), but would allow me the use of fixtures.
A further extension is when certain requests rely on certain data being Live already. Using the Google Content API as an example again, to add a Data Feed to a Sub Account, the Sub Account must already exist.
One approach I can think of is the following steps;
*
*In testCreateAccount
*
*Create a sub-account
*Assert the sub-account was created
*Delete the sub-account
*Have testCreateDataFeed depend on testCreateAccount not having any errors
*
*In testCreateDataFeed, create a new account
*Create the data feed
*Assert the data feed was created
*Delete the data feed
*Delete the sub-account
This then raises the further question; how do I test the deletion of accounts / data feeds? testCreateDataFeed feels dirty to me - What if creating the data feed fails? The test fails, therefore the sub-account is never deleted... I can't test deletion without creation, so do I write another test (testDeleteAccount) that relies on testCreateAccount before creating then deleting an account of its own (since data shouldn't be shared between tests).
In Summary
*
*How do I test interacting with an external API that affects live data?
*How can I mock / stub objects in an Integration test when they're hidden behind layers of abstraction?
*What do I do when a test fails and the live data is left in an inconsistent state?
*How in code do I actually go about doing all this?
Related:
*
*How can mocking external services improve unit tests?
*Writing unit tests for a REST-ful API
A: This is more an additional answer to the one already given:
Looking through your code, the class GoogleAPIRequest has a hard-coded dependency on the Request class. This prevents you from testing it independently from the Request class, so you can't mock the request.
You need to make the request injectable, so you can swap it for a mock while testing. That done, no real API HTTP requests are sent, the live data is not changed, and you can test much more quickly.
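As a rough sketch (the optional-constructor approach and the PHPUnit mock calls are just one way to do it, not the only one), the injection could look like this:
abstract class GoogleAPIRequest
{
    private $request;

    // Request is now injectable; it defaults to the real implementation
    public function __construct(Request $request = null) {
        $this->request = ($request !== null) ? $request : new Request();
        $this->request->setUrl($this->getUrl());
        $this->request->setData($this->getData());
        $this->request->setHeaders($this->getHeaders());
    }
    // ... rest unchanged
}
A test can then build a stub with $this->getMock('Request'), stub out execute() and wasSuccessful(), and hand it in (a subclass like CreateSubAccountRequest would just forward it via parent::__construct($request)), so no live POST is ever made.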
A: I've recently had to update a library because the API it connects to was updated.
My knowledge isn't enough to explain it in detail, but I learnt a great deal from looking at the code. https://github.com/gridiron-guru/FantasyDataAPI
You can submit a request as you would normally to the API and then save that response as a JSON file; you can then use that as a mock.
Have a look at the tests in this library, which connects to an API using Guzzle.
It mocks responses from the API; there's a good deal of information in the docs on how the testing works, and it might give you an idea of how to go about it.
Basically, you do a manual call to the API along with any parameters you need, and save the response as a JSON file.
When you write your test for the API call, send along the same parameters and get it to load the mock rather than using the live API; you can then test that the data in the mock you created contains the expected values.
My updated version of the API in question can be found here.
Updated Repo
A: One of the ways to test out external APIs is as you mentioned, by creating a mock and working against that with the behavior hard coded as you have understood it.
Sometimes people refer to this type of testing as "contract based" testing, where you can write tests against the API based on the behavior you have observed and coded against, and when those tests start failing, the "contract is broken". If they are simple REST based tests using dummy data you can also provide them to the external provider to run so they can discover where/when they might be changing the API enough that it should be a new version or produce a warning about not being backwards compatible.
Ref: https://www.thoughtworks.com/radar/techniques/consumer-driven-contract-testing
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "88"
}
|
Q: jQuery - hiding/collapsing a row I want table rows to disappear (by animating their height to 0px and opacity to 0). I'm using the following
$('#somecellelement').closest('tr').children('td').css('overflow','hidden');
$('#somecellelement').closest('tr').children('td').animate({
height: '0px',
opacity: 0
},500);
Where #somecellelement is something contained in cells of rows I want hidden. Opacity is animating correctly but the table row doesn't get smaller. If I set height: 500px then it works, but I need the row to disappear.
I cannot remove the element from DOM, though, due to scripts expecting values from form elements in those rows.
A: As Doozer Blake's answer says, you cannot apply jQuery animations to <td> and <tr> elements. I also don't like the idea of adding a <div> element to every cell in the table in advance, because in large tables it can hurt performance.
However, you can use jQuery's wrapInner function to dynamically wrap the contents of the <td>, only when you need to animate the row:
$('#somecellelement')
.closest('tr')
.children('td')
.wrapInner('<div class="td-slider" />')
.children(".td-slider")
.slideUp();
A word about padding
If your <td> elements have padding, they also need to be animated for a complete collapsing effect. This can be done easily with CSS3 transitions:
$('#somecellelement').closest('tr').addClass('collapsed');
And the CSS:
td {padding:10px; transition:padding 1s;}
.collapsed > td {padding:0;}
A: If you can apply a <div> within each <td> element, then you can animate them properly. jQuery does not apply height animations to <tr> and <td>. The height animations only work on elements with display: block set, I believe.
A small change:
$('#somecellelement').closest('tr').children('td').children('div').animate(
{height: '0px',
opacity: 0}, 500);
Full sample here:
http://jsfiddle.net/PvwfK/
A: You are animating the children; animate the <tr> instead:
$('#somecellelement').closest('tr').animate({
height: '0px',
opacity: 0
},500);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Lucene document score appears to be lost after search In Lucene 3.1 I have a large Boolean query that I execute like so:
IndexSearcher is = new IndexSearcher(myDir);
is.search(query, 10);
I get 10 results just fine, but they are sorted by docId and contain no score information. All documentation I can find says that Lucene sorts by relevance/score by default, but this is not the case for me. If I ask for an explain, there is no score information, just "0.0". The funny thing is that if I execute the same query in Luke on the same index, I get results sorted by score just fine, but I can't see how to get the scores to stay and be used for sorting when launched from the app. So I believe the query is just fine, seeing how it works in Luke.
What am I doing wrong? I have also tried setting is.setDefaultFieldSortScoring(true, true) but this makes no difference. I tried using TopScoreDocCollector with no success.
A: Look at Lucene scoring, particularly the query norm. If one of your weights is Float.MAX_VALUE everything else will be close enough to zero that it's smaller than machine precision.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: TextfieldShouldReturn breakpoint not being visited after return key hit In the .h
@interface WordListTableController : UITableViewController <UITextFieldDelegate>
In the .m
// Customize the appearance of table view cells.
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
static NSString *CellIdentifier = @"Cell";
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
if (cell == nil) {
cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
}
// Configure the cell...
UITextField *FirstField = [[UITextField alloc] initWithFrame:CGRectMake(10, 10, 130, 25)];
FirstField.delegate = self;
FirstField.tag = indexPath.row;
[cell.contentView addSubview:FirstField];
FirstField.returnKeyType = UIReturnKeyNext;
[FirstField release];
return cell;
}
// Handle any actions, after the return/next/done button is pressed
- (BOOL)textfieldShouldReturn:(UITextField *)textfield
{
[textfield resignFirstResponder];
return YES;
}
What am I missing? Breakpoint is not being visited?
A: The way you are adding the UITextField is problematic. You add it each time cellForRowAtIndexPath is called without ever removing it. So you might actually end up with several text fields stacked on top of each other.
Try moving FirstField's creation into the if (cell == nil) {} block. Maybe this will also solve your problem.
A: You do not need to create it repeatedly; just create it when the cell is created!
if (cell == nil) {
cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
// Configure the cell...
UITextField *FirstField = [[UITextField alloc] initWithFrame:CGRectMake(10, 10, 130, 25)];
FirstField.delegate = self;
FirstField.tag = indexPath.row;
[cell.contentView addSubview:FirstField];
FirstField.returnKeyType = UIReturnKeyNext;
[FirstField release];
}
A: Use this code snippet. textfieldShouldReturn (with a lowercase 'f') is the wrong selector; the delegate method is textFieldShouldReturn, so your version is never called.
- (BOOL)textFieldShouldReturn:(UITextField *)textField
{
[textField resignFirstResponder];
return YES;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to update WSDL_Imp.exe in Delphi 7 I have downloaded the updated version of WSDL_Imp.exe from http://cc.embarcadero.com/item/24535. I followed the instructions to update the existing WSDLImporter.exe. However, I see no effect. Strangely, if I remove WSDL_Imp.exe from the bin folder, the WSDL importer from D7 still runs. Could someone tell me how to update it correctly so that the updated version runs when I use the WSDLImporter wizard?
A: This is an update to the command line importer and not the one embedded in the IDE.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What is the best way to send messages to Socket.IO clients from various back-ends My Setup:
I have an existing python script that is using Tweepy to access the Twitter Streaming API. I also have a website that shows aggregate real-time information from other sources from various back-ends.
My Ideal Scenario:
I want to publish real-time tweets as well as real-time updates of my other information to my connected users using Socket.IO.
It would be really nice if I could do something as simple as an HTTP POST (from any back-end) to broadcast information to all the connected clients.
My Problem:
The Socket.IO client implementation is super straight forward... i can handle that. But I can't figure out if the functionality I'm asking for already exists... and if not, what would be the best way to make it happen?
[UPDATE]
My Solution: I created a project called Pega.IO that does what I was looking for. Basically, it lets you use Socket.IO (0.8+) as usual, but you can use HTTP POST to send messages to connected users.
It uses the Express web server with a Redis back-end. Theoretically this should be pretty simple to scale -- I will continue contributing to this project going forward.
Pega.IO - github
To install on Ubuntu, you just run this command:
curl http://cloud.github.com/downloads/Gootch/pega.io/install.sh | sh
This will create a Pega.IO server that is listening on port 8888.
Once you are up and running, just:
HTTP POST http://your-server:8888/send
with data that looks like this:
channel=whatever&secretkey=mysecret&message=hello+everyone
That's all there is to it. HTTP POST from any back-end to your Pega.IO server.
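For example, an illustrative command-line call using the data format above (curl just performs the HTTP POST):
curl -d "channel=whatever&secretkey=mysecret&message=hello+everyone" http://your-server:8888/send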
A: The best way I've found for this sort of thing is using a message broker. Personally, I've used RabbitMQ for this, which seems to meet the requirements mentioned in your comment on the other answer (socket.io 0.7 and scalable). If you use RabbitMQ, I'd recommend the amqp module for node, available through npm, and the Pika module for Python.
An example connector for Python using pika. This example accepts a single json-serialized argument:
def amqp_transmit(message):
connection = pika.AsyncoreConnection(pika.ConnectionParameters(host=settings.AMQP_SETTINGS['host'],
port=settings.AMQP_SETTINGS['port'],
credentials=pika.PlainCredentials(settings.AMQP_SETTINGS['username'],
settings.AMQP_SETTINGS['pass'])))
channel = connection.channel()
channel.exchange_declare(exchange=exchange_name, type='fanout')
channel.queue_declare(queue=NODE_CHANNEL, auto_delete=True, durable=False, exclusive=False)
channel.basic_publish(exchange=exchange_name,
routing_key='',
body=message,
properties=pika.BasicProperties(
content_type='application/json'),
)
print ' [%s] Sent %r' %(exchange_name, message)
connection.close()
Very basic connection code on the node end might look like this:
var connection = amqp.createConnection(
{host: amqpHost,
port: amqpPort,
password: amqpPass});
function setupAmqpListeners() {
connection.addListener('ready', amqpReady)
connection.addListener('close', function() {
console.log('Uh oh! AMQP connection failed!');
});
connection.addListener('error', function(e) {throw e});
}
function amqpReady(){
console.log('Amqp Connection Ready');
var q, exc;
q = connection.queue(queueName,
{autoDelete: true, durable: false, exclusive: false},
function(){
console.log('Amqp Connection Established.');
console.log('Attempting to get an exchange named: '+exchangeName);
exc = connection.exchange(exchangeName,
{type: 'fanout', autoDelete: false},
function(exchange) {
console.log('Amqp Exchange Found. ['+exchange.name+']');
q.bind(exc, '#');
console.log('Amqp now totally ready.');
q.subscribe(routeAmqp);
}
);
}
);
}
routeAmqp = function(msg) {
console.log(msg);
doStuff(msg);
}
Edit: The example above uses a fan-out exchange that does not persist messages. Fan-out exchange is likely going to be your best option since scalability is a concern (ie: you are running more than one box running Node that clients can be connected to).
A: Why not write your Node app so that there are two parts:
*
*The Socket.IO portion, which communicates directly with the clients, and
*An HTTP API of some sort, which receives POST requests and then broadcasts appropriate messages with Socket.IO.
In this way, your application becomes a "bridge" between your non-Node apps and your users' browsers. The key here is to use Socket.IO for what it was made for--real time communication to browsers--and rely on other Node technologies for other parts of your application.
[Update]
I'm not on a development environment at the moment, so I can't get you a working example, but some pseudocode would look something like this:
http = require('http');
io = require('socket.io');
server = http.createServer(function(request, response) {
// Parse the HTTP request to get the data you want
io.sockets.emit("data", whatever); // broadcast the data to Socket.IO clients
});
server.listen(8080);
socket_server = io.listen(server);
With this, you'd have a web server on port 8080 that you can use to listen to web requests (you could use a framework such as Express or one of many others to parse the body of the POST request and extract the data you need).
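A rough, untested sketch of that bridge (assuming a recent Express and Socket.IO version rather than the APIs that were current when this was written):
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io')(server);

app.use(express.urlencoded({ extended: false })); // parse form-encoded POST bodies

// Any non-Node back-end can POST here; the message is broadcast to all connected clients
app.post('/send', function (req, res) {
    io.emit('data', req.body.message);
    res.sendStatus(200);
});

server.listen(8080);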
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Does WinRT/Metro support multiple monitors? I want to create an application that has multiple touch surfaces, preferably using the Metro/WinRT APIs. However, much of what I've read indicates that Metro is confined to a single (primary?) monitor. Is that true?
A: I have tested Win8 with multiple monitors. The metro interface is only ever available on a single monitor, with the other always displaying the 'traditional' desktop. You can switch which monitor displays the metro UI, but cannot render it on both.
A: Microsoft heard this request and added multi-monitor support to Windows 8.1. See the Windows.UI.ViewManagement namespace, specifically the ProjectionManager and ApplicationViewSwitcher classes. There's also a Projection sample for this.
A: Metro style applications are full screen, single screen only. There is no way to have a dual-screen application.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Why does my fork/join deadlock? Consider the following snippet of code, which calculates the size of all the paths given.
def pathSizes = []
paths.each { rootPath ->
pathSizes.addAll(
withPool { pool ->
runForkJoin(rootPath) { path ->
def headSizes = [:]
println path
def lines = ["ls", "-al", path].execute().text.readLines()
(0..<3).each { lines.remove(0) }
lines.each { line ->
def fields = line.split(/\s+/)
if (fields[0] =~ /^d/)
forkOffChild("$path/${fields.last()}")
else {
def userName = fields[2]
def fileSize = fields[4] as long
if (headSizes[userName] == null)
headSizes[userName] = fileSize
else
headSizes[userName] += fileSize
}
}
quietlyJoin()
System.gc()
def shallowSizes =
headSizes.collectEntries { userName, fileSize ->
def childResult =
childrenResults.sum {
it.shallowSizes[userName] ? it.shallowSizes[userName] : 0
} ?: 0
return [userName, fileSize + childResult]
}
def deepSizes =
childrenResults.sum { it.deepSizes ?: [] } +
shallowSizes.collect { userName, fileSize ->
[userName: userName, path: path, fileSize: fileSize]
}
return [shallowSizes: shallowSizes, deepSizes: deepSizes]
}.deepSizes
})
}
Why does this snippet of code deadlock? There are no interactions between threads except possibly with the system call and other parts of the Java framework. If the system calls are the problem, then how can I fix it, without removing the system calls (they are slow, hence the need to parallelize)?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: error: 'int main(int, char**)' previously defined here in C++ I'm implementing gtest now, and it gives me an error : main previously defined here.
Here's utest.cpp
// Bring in my package's API, which is what I'm testing
#include "../src/test.cpp"
// Bring in gtest
#include <gtest/gtest.h>
// Declare a test
TEST(TestSuite, testCase1)
{
EXPECT_EQ(5,getX(5));
}
// Run all the tests that were declared with TEST()
int main(int argc, char **argv){
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
And here's the code that i'm testing
test.cpp
#include "ros/ros.h"
#include "std_msgs/String.h"
#include <Project2Sample/R_ID.h>
#include <geometry_msgs/Twist.h>
#include <nav_msgs/Odometry.h>
#include <sensor_msgs/LaserScan.h>
#include <sstream>
#include "math.h"
int getX(int x)
{
return x;
}
int main(int argc, char **argv)
{
return 0;
}
There's nothing in test.cpp's main, but the actual code will have some code in main.
I don't have header files for the utest and test cpp files.
I tried
#ifndef UTEST_H
#define UTEST_H
and didn't solve the error.
A: The error message states what the problem is, you have two main() functions. I believe you want to remove the duplicate main() from test.cpp.
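If test.cpp must still build as a standalone program, one common workaround (a sketch; the macro name is arbitrary) is to guard its main() with a preprocessor symbol that the test build defines before including the file:
// test.cpp
#ifndef UNIT_TESTING
int main(int argc, char **argv)
{
    return 0;
}
#endif

// utest.cpp
#define UNIT_TESTING
#include "../src/test.cpp"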
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Generic form handler script? I've been given the task to turn a couple of forms for a charity into online forms to fill out (stuff to sign up for different programs in the charity). The forms I have been give all just have text input fields and I can make the html version.
I was wondering if someone knows of a PHP script that will handle input from the html forms, and chuck them in a text file or database (maybe not database because the fields may change). It doesn't have to be super feature complete, but the charity does not want to use a service such as wufoo.
If anyone has any suggestions it would be greatly appreciated!
A: You might want to check out FormMail.
http://www.tectite.com/formmailpage.php
It's a free script that allows you to save your HTML form submissions to a text file (CSV) or have it email the results. There's plenty of documentation/links on the site to help with configuring it.
A: As over-the-top as this is going to sound, drupal7 has nice form management right out of the box…er, zip file.
If you've already got PHP and MySQL, the setup time is measured in minutes for a basic install. There's also a simple way of creating custom forms and all sorts of configurations to allow you to use the data that's been collected.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is it possible to differentiate between page refresh and close using the below method? I want to open a popup window on closing the page, and not during a refresh of the page.
So what I am thinking of doing is:
Client
Send ajax request to server every x seconds.
Server
var timeout=0
var sec=0
while (sec>timeout)
{
open popup window
sec++;
if(request){
sec=0
}
}
Assumption
Timeout will be the maximum delay between requests during a page refresh.
So my questions are:
1) Will this technique work?
2) If not, is there any other method to do it?
3) If it works, what is the optimal value for the timeout variable (in seconds)?
4) A small example of how to implement this in Django would be much appreciated.
A: Just think of a simple solution; why go for the complex one?
The simple way to get a popup on page close is to just define the "window.onunload" event as shown below.
window.onunload = function(){
//Code to display Popup
}
Hope this helps you.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: emacs 23.2.1: character-by-character copying from the previous line seems broken After upgrading to emacs 23.2.1 with Fedora 15 one of my emacs functions is broken and I just can't seem to find the problem. Something has to have changed, but I sure can't find it in the elisp documentation.
I'd really appreciate any suggestions!
What it should do: Pressing Ctl-hyphen should copy the character above the cursor and move the point ahead by 1.
What it does: Keeps copying the character above without advancing the point. HOWEVER, calling it with "M-x insert-char-from-previous-line" or pressing RightArrow between Ctl-hypens works as it should. Adding (forward-char 1) to the end of the script doesn't help.
(defun insert-char-from-previous-line ()
"Copy previous line character by character"
(interactive)
(previous-line 1)
(let ((char-above (following-char) ))
(next-line 1)
(if (not (char-equal char-above ?\n ))
(insert char-above))))
(global-set-key [?\C--] 'insert-char-from-previous-line) ;copy char above
A: I think you're getting caught in a goal-column issue. You probably should not use next-line/previous-line for this; rather, try this:
(defun insert-char-from-previous-line ()
"Copy previous line character by character"
(interactive)
(let ((curpoint (point))
(curcolumn (current-column)))
(forward-line -1)
(forward-char curcolumn)
(let ((char-above (following-char) ))
(goto-char curpoint)
(if (not (char-equal char-above ?\n ))
(insert char-above)))))
From the docs on pervious-line:
If you are thinking of using this in a Lisp program, consider using
`forward-line' with a negative argument instead. It is usually easier
to use and more reliable (no dependence on goal column, etc.).
(describe-function 'previous-line)
A: don't move the point for something like this:
(defun i-char (arg)
  (interactive "*p")
  (let ((start (+ (point-at-bol 0)
                  (current-column))))
    (insert (buffer-substring-no-properties start (+ start arg)))))
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Expanded List has same children for all parents Two issues here. I'm following a tutorial and need a little help adapting it to my project.
1) The children are the same for each group. For example, the ArrayList dining contains the children of the group "Dining Commons". I need to make two more ArrayLists containing academic buildings and residential buildings, and those need to be the children of their respective parents.
2) Is it possible to make the child items clickable, using a click listener or something like that?
java source.
package com.bogotobogo.android.smplexpandable;
import android.app.ExpandableListActivity;
import android.os.Bundle;
import android.widget.ExpandableListAdapter;
import android.widget.SimpleExpandableListAdapter;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* Demonstrates expandable lists backed by a Simple Map-based adapter
*/
public class SmplExpandable extends ExpandableListActivity {
private static final String NAME = "NAME";
private static final String IS_EVEN = "IS_EVEN";
ArrayList buildings = BuildingList();
ArrayList diningCommonBuildings = DiningList();
private ExpandableListAdapter mAdapter;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
List<Map<String, String>> groupData = new ArrayList<Map<String, String>>();
List<List<Map<String, String>>> childData = new ArrayList<List<Map<String, String>>>();
for (int i = 0; i < buildings.size(); i++) {
Map<String, String> curGroupMap = new HashMap<String, String>();
groupData.add(curGroupMap);
curGroupMap.put(NAME, (String) buildings.get(i));
//curGroupMap.put(IS_EVEN, (i % 2 == 0) ? "This group is even" : "This group is odd");
List<Map<String, String>> dining = new ArrayList<Map<String, String>>();
for (int j = 0; j < diningCommonBuildings.size(); j++) {
Map<String, String> curChildMap = new HashMap<String, String>();
dining.add(curChildMap);
curChildMap.put(NAME, (String) diningCommonBuildings.get(j));
//curChildMap.put(IS_EVEN, (j % 2 == 0) ? "This child is even" : "This child is odd");
}
childData.add(dining);
}
// Set up our adapter
mAdapter = new SimpleExpandableListAdapter(
this,
groupData,
android.R.layout.simple_expandable_list_item_1,
new String[] { NAME, IS_EVEN },
new int[] { android.R.id.text1, android.R.id.text2 },
childData,
android.R.layout.simple_expandable_list_item_2,
new String[] { NAME, IS_EVEN },
new int[] { android.R.id.text1, android.R.id.text2 }
);
setListAdapter(mAdapter);
}
private ArrayList BuildingList() {
ArrayList buildings = new ArrayList();
buildings.add("Academic Buildings");
buildings.add("Dining Commons");
buildings.add("Residential Buildings");
return buildings;
}
private ArrayList DiningList() {
ArrayList dining = new ArrayList();
dining.add("Berkshire");
dining.add("Franklin");
dining.add("Hampden");
dining.add("Hampshire");
dining.add("Worcester");
return dining;
}
}
A: Check out this example,
public class MainExpand extends ExpandableListActivity {
private ExpandableListAdapter adapter;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
List<Map<String, String>> parent = new ArrayList<Map<String,String>>();
List<List<Map<String, String>>> childGroup = new ArrayList<List<Map<String,String>>>();
for (int i = 0; i < 5; i++) {
Map<String, String> parentMap = new HashMap<String, String>();
parentMap.put("lalit", "Parent "+i);
parent.add(parentMap);
List<Map<String, String>> child = new ArrayList<Map<String,String>>();
for (int j = 0; j < 2; j++) {
Map<String, String> childMap = new HashMap<String, String>();
childMap.put("lalit", "Child "+j);
child.add(childMap);
}
childGroup.add(child);
}
adapter = new SimpleExpandableListAdapter(
this,
parent,android.R.layout.simple_expandable_list_item_1,new String[] {"lalit"},new int[] { android.R.id.text1, android.R.id.text2},
childGroup,android.R.layout.simple_expandable_list_item_2,new String[]{"lalit","even"}, new int[]{android.R.id.text1,android.R.id.text2}
);
setListAdapter(adapter);
}
A: I used a combination of both of your answers. Thanks. I took advantage of the multidimensional array.
First i created these arrays.
private String[] buildingTypes = {
"Academic Buildings", "Dining Commons", "Residential Halls" };
private String[][] buildingList = {
{ "Agriculutural Engineering", "Army ROTC", "Arnold House", "Studio Arts Bldg","Bartlett" },
{ "Berkshire", "Franklin", "Hampden", "Hampshire", "Worcester" },
{ "Baker Hall", "Brett Hall", "Brooks Hall", "Brown Hall","Butterfield Hall" },
Then I modified the for loops
for (int i = 0; i < buildingTypes.length; i++) {
Map<String, String> parentMap = new HashMap<String, String>();
parentMap.put("lalit", buildingTypes[i]);
parent.add(parentMap);
List<Map<String, String>> child = new ArrayList<Map<String,String>>();
for (int j = 0; j < rows; j++) {
Map<String, String> childMap = new HashMap<String, String>();
childMap.put("lalit", buildingList[i][j]);
child.add(childMap);
}
childGroup.add(child);
}
A: Here are the answers to both of your questions. If something is not OK for you, just tell me in the comments.
Answer of part 1):
Please look here: http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/view/ExpandableList1.html
private String[][] children = {
{ "Academic1", "Academic2", "Academic3", "Academic4" },
{ "Dining1", "Dining2", "Dining2", "Dining3" },
{ "Residential1", "Residential2" },
};
Answer of part 2):
http://developer.android.com/reference/android/widget/ExpandableListView.OnChildClickListener.html
onChildClick(ExpandableListView parent, View v, int groupPosition, int
childPosition, long id) Callback method to be invoked when a child in
this expandable list has been clicked.
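Since your activity already extends ExpandableListActivity, you can simply override that callback; a minimal sketch (the Toast is only illustrative):
@Override
public boolean onChildClick(ExpandableListView parent, View v,
        int groupPosition, int childPosition, long id) {
    // React to the tapped child, e.g. show which group/child was selected
    Toast.makeText(this, "Group " + groupPosition + ", child " + childPosition,
            Toast.LENGTH_SHORT).show();
    return true; // true = the click was handled
}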
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to insert link in notification using C#? I am developing an application to watch for file changes in a folder and display a notification to users. This part works, but I am having difficulty inserting a link (the file directory) into the notification, as this link needs to open the watched folder.
Can anyone suggest how this could be implemented?
code:
watcher.Changed += new FileSystemEventHandler(OnChanged);
watcher.Created += new FileSystemEventHandler(OnChanged);
watcher.Deleted += new FileSystemEventHandler(OnChanged);
watcher.Renamed += new RenamedEventHandler(OnRenamed);
private void OnChanged(object source, FileSystemEventArgs e)
{
this.notifyIcon1.ShowBalloonTip(1, "File " + e.ChangeType, e.FullPath, ToolTipIcon.Info);
}
private void OnRenamed(object source, RenamedEventArgs e)
{
this.notifyIcon1.ShowBalloonTip(1, "File Renamed", e.OldFullPath + " renamed to " + e.FullPath, ToolTipIcon.Info);
}
I would like to display the directory part of [e.FullPath] as a link in the notification (excluding e.Name).
eg. e.FullPath -> C:\TEMP\test.txt, e.Name -> test.txt
I want to display [C:\TEMP] as link.
Thanks all for the suggestions. That difficulty is resolved now: clicking the notification opens the watched folder.
My code:
this.notifyIcon1.BalloonTipClicked += new System.EventHandler(this.linkLabel_LinkClick);
private void linkLabel_LinkClick(object sender, EventArgs e)
{
System.Diagnostics.Process.Start(@"C:\TEMP\test.txt");
}
A: Assuming you want to let the user click a link in the notification that opens the path in Explorer, here's one way to do it.
*
*Add a LinkLabel to the notification window.
*Create a LinkLabel.Link object in the code behind that stores the desired path.
*Set up a handler for the LinkLabel's LinkClicked event and make a call to open Explorer to the path in the Link.
// step 2 -- implement where you have access to the desired path
linkLabel1.Links.Add(new LinkLabel.Link(0, 0, "C:\\"));
// step 3 -- open the path in Explorer
private void linkLabel1_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
{
System.Diagnostics.Process.Start(e.Link.LinkData.ToString());
}
A: If you are using MessageBox for notifications, then you need to implement your own form and put whatever you need on it, like LinkLabel as Jake suggested.
And the desired location can be opened by putting the following code into LinkClicked eventhandler (assuming the text of your link is the location you want to open):
System.Diagnostics.Process.Start(linkLabel1.Text);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to restore a text file? I want to restore the following data from the text file. The problem is that I can only restore one string/line; I can't restore the rest of the data.
Here's the code :
public static String restore(String filename) throws IOException, ClassNotFoundException
{
FileInputStream fn = new FileInputStream(filename);
ObjectInputStream ob = new ObjectInputStream(fn);
String sample = (String) ob.readObject();
return sample;
}
A: It is hard to understand the meaning of this question, but if you just want to read lines from a .txt file and into an array, then these two methods might help.
You just need to call String[] textArray = readFromFile("yourfilename.txt");
This gives you an array with each line in the file as an element.
Scanner fScan(String fname) {
Scanner sc = null;
try {
sc = new Scanner(new File(fname));
} catch (FileNotFoundException e) {
System.out.println("File not found:" + fname + " " + e);
}
return sc;
}
String[] readFromFile (String fname) {
Scanner sc = fScan(fname);
int length = 0;
String lineCounter;
while (sc.hasNext()){
lineCounter = sc.nextLine();
length++;
}
String[] array = new String[length];
sc = fScan(fname);
for (int i = 0; i < length; i++) {
array[i] = sc.nextLine();
}
sc.close();
return array;
}
A: Your code only reads the first element of your binary file.
public static void restore(String filename) throws IOException, ClassNotFoundException
{
FileInputStream fn = new FileInputStream(filename);
ObjectInputStream ob = new ObjectInputStream(fn);
String string1 = (String) ob.readObject();
String string2 = (String) ob.readObject();
}
Are you sure you did not overwrite your file while serializing it?
But as far as I understand your question, you don't want to serialize/deserialize a String object; you want to read/write a text file.
If you just want to read/write a file, ObjectInputStream is the wrong tool.
Take a look at:
http://download.oracle.com/javase/1.3/docs/api/java/io/BufferedReader.html
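A minimal sketch of reading a plain text file line by line with BufferedReader (the file name is just an example; it needs the java.io imports):
BufferedReader reader = new BufferedReader(new FileReader("data.txt"));
String line;
while ((line = reader.readLine()) != null) {
    System.out.println(line); // or collect the lines in a List<String>
}
reader.close();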
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Panels creation at runtime can't see them My problem is that I am trying to make a panel. My button is in Main.mxml whereas the panel functions are defined in panel_Create.mxml. The code works fine. In panel_Create there are functions to create panels at runtime. The problem I am facing is that when I run the program it won't show the panels, but it does increase the value of n, and after 8 clicks it gives the alert message. Please tell me why I can't see the panels. The code works fine when I put all the code in Main.mxml.
<fx:Script>
<![CDATA[
import Components.panel_Create;
import mx.controls.Alert;
import spark.components.Button
public var adminPanel:panel_Create = new panel_Create();
public var n:Number = 0;
public function panel(event:MouseEvent): void
{
if ( n < 8)
{
adminPanel.panel_Create(n);
n++;
}
else
Alert.show('More Panels Not Allowed', 'Alert Box', mx.controls.Alert.OK);
}
]]>
</fx:Script>
<s:Button id="add" includeIn="State1" x="398" y="10" label="Add Panel" click="panel(event)"/>
<Components2:panel_Create includeIn="State1" x="10" y="66" width="737" height="599">
</Components2:panel_Create>
</s:Application>
A: I believe that the 8 panels are created and based on the code in one of your comments they are added as child elements to the adminPanel.
The problem is that your adminPanel is never added to the stage so is not visible.
A: Try this instead:
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:mx="library://ns.adobe.com/flex/mx">
<fx:Script>
<![CDATA[
private var panels:Array = [];
private function addPanel():void
{
if (panels.length < 8)
{
var panel:Panel = new Panel();
panel.title = "Panel "+(panels.length + 1);
panels.push(panel);
addElement(panel);
}else{
trace('More Panels Not Allowed');
}
}
]]>
</fx:Script>
<s:Button label="Add Panel" click="addPanel()"/>
</s:Application>
A: adminPanel is never added to the display list; you have to call addElement(adminPanel) yourself:
if ( n < 8)
{
adminPanel.panel_Create(n);
addElement(adminPanel)
n++;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Update DB column per row in per second using timer in c# I have a MySQL DB table. One column contains groups of rows with the same value. I want to assign a timer to each distinct value, so rows with the same value share the same timer. For example, 100, 100, 100 have timer1; 102, 102, 102 have timer2; and so on. I also want to update two specific columns, and this update should continue row by row, once per second, using those assigned timers. Here is my code:
void PrepareTimers(List<int> _dataValues)
{
foreach (int dataValue in _dataValues)
{
ThreadingTimer timer = new ThreadingTimer(TimerAction, dataValue, 1000, 0);
}
}
void TimerAction(object flag)
{
string myconstring = "SERVER=localhost;" + "DATABASE=alicosms;" + "UID=root;" + "PASSWORD=;";
MySqlConnection mycon = new MySqlConnection(myconstring);
string u = "UPDATED";
mycon.Open();
MySqlCommand cmd = new MySqlCommand("UPDATE sms_data_bankasia set flag= @flag * 2 , sendingstatus = '" + u + "' WHERE flag = @flag", mycon);
MySqlParameter param = new MySqlParameter();
param.ParameterName = "@flag";
param.Value = flag;
cmd.Parameters.Add(param);
cmd.ExecuteNonQuery();
}
It's not working. What is the best way to do this? Can anyone help me, please? Any help would be greatly appreciated.
A: In order to update only a single row at a time you need to add a LIMIT 1 clause to your SQL query. It places a limit on the number of rows that can be updated.
So your SqlCommand should be:
MySqlCommand cmd = new MySqlCommand("UPDATE sms_data_bankasia set flag= @flag * 2 , sendingstatus = '" + u + "' WHERE flag = @flag LIMIT 1", mycon);
A: I think I found my problem. The problem was in the timer declaration: I set the period to 0 and the due time to 1000, so TimerAction was never called again. There was also a problem in the query. That is why it did not do what I wanted. It should be written like:
ThreadingTimer timer = new ThreadingTimer(TimerAction, dataValue, 0, 1000);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Live method overrride I'm pretty new to Java and I'm finishing my first project using it. Basically I read Head First Java and the API documentation for the classes I've used so far. That's my Java background.
This little piece of code raised a big doubt for me; basically, what does this statement mean?
DataSource dataSource = new FileDataSource(tiffFile) {
public String getContentType() {
return "image/tiff";
}
};
Is it like a "live method override"? I still don't get what those brackets are doing there.
I'd really appreciate your help on this one.
Cheers.
A: What you've run across is an Anonymous Inner Class. There are many kinds of nested classes in Java and it would be beneficial for you to be familiar with all of them. I am including a link to a tutorial as a good starting point. Good luck!
Nested Classes in Java
A: It's called an Anonymous Inner Class. This creates a subclass of a FileDataSource with a call to the super contructor FileDataSource(tiffFile), in which the getContentType() method becomes overriden.
It can be rewritten as follows:
public static class TiffFileSource extends FileDataSource {
public TiffFileSource(File file){
super(file);
}
public String getContentType() {
return "image/tiff";
}
}
DataSource dataSource = new TiffFileSource(tiffFile);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Delay when issuing any rails commands on windows vista Every time I run any rails command on my windows vista dev box there is always something like a 30 second delay before the command does anything. I don't have the firewall enabled and the virus protection is disabled. Any thoughts on what could be causing this?
Thanks
A: I can't comment on this specific issue, but I can say I had nothing but problems trying to run Rails on Windows in the past. That said, after trying Ubuntu and MacOs, I still prefer to use Windows as my primary UI. My solution is to run Ubuntu Server on a VM, and use a Samba file share to access dev files, with putty as my primary console interface. The linux command line is far more powerful, and is where the Rails ecosystem is really geared to be running.
VirtualBox: http://virtualbox.com
Ubuntu: http://ubuntu.com
Samba: http://www.samba.org/
Putty: http://www.chiark.greenend.org.uk/~sgtatham/putty/
With these tools, you can run your Rails stack in a nice linux server environment, and still enjoy the utility and functionality of the Windows GUI. (Although I'd recommend you move from Vista to 7)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Is there any better way to access twitter streaming api through python? I need to fetch twitter historical data for a given set of keywords. The Twitter Search API returns tweets that are not more than 9 days old, so that will not do. I'm currently using the Tweepy library (http://code.google.com/p/tweepy/) to call the Streaming API, and it works, except that it is too slow. For example, when I run a search for "$GOOG", sometimes it takes more than an hour between two results. There are definitely tweets containing that keyword, but it isn't returning results fast enough.
What can the problem be? Is the Streaming API slow, or is there some problem in my method of accessing it? Is there any better way to get that data free of cost?
A: How far back do you need? To fetch historical data, you might want to keep the stream on indefinitely (the stream API allows for this) and store the stream locally, then retrieve historical data from your db.
I also use Tweepy for live Stream/Filtering and it works well. The latency is typically < 1s and Tweepy is able to handle large volume streams.
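A rough sketch of that approach with Tweepy's streaming classes (untested; the listener class and output file are made up, and auth is assumed to be an already configured OAuthHandler):
from tweepy import Stream
from tweepy.streaming import StreamListener

class StoreListener(StreamListener):
    def on_data(self, data):
        # append the raw tweet JSON to a local file (or insert it into your db)
        with open('tweets.json', 'a') as out:
            out.write(data)
        return True

    def on_error(self, status_code):
        print status_code
        return True  # keep the stream alive

stream = Stream(auth, StoreListener())
stream.filter(track=['$GOOG'])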
A: The streaming API is very fast: you get a message as soon as it is posted (we use twitter4j). But the streamer only streams current messages, so if you are not listening on the streamer at the moment a tweet is sent, the message is lost.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Haskell iterate over a list I know you're supposed to think differently in Haskell, but can someone give me a quick answer on how to iterate over a list or nested list and print out a character based on the value of each list element?
list1 = [[1 0 0][0 1 0][0 0 1]]
By iterating through this nested list, it should print out x for 0 and y for 1:
yxx
xyx
xxy
Thanks
A: Try this:
fun :: [[Int]] -> [String]
fun = (map . map) (\x -> if x == 0 then 'x' else 'y')
If you really need printing of result:
printSomeFancyList :: [[Int]] -> IO ()
printSomeFancyList = putStrLn . unlines . fun
A: define f by something like
f x = if x == 0 then 'x' else 'y'
then
map (map f) [[1,0,0],[0,1,0],[0,0,1]]
is what you want or if you want it fancier:
map' = map.map
map' f [[1,0,0],[0,1,0],[0,0,1]]
A: The solutions using map are the preferred Haskell style. But while you're learning, you may find explicit recursion easier to follow. Like so:
charSub :: Int -> Char
charSub 0 = 'x'
charSub 1 = 'y'
charSub x = error "Non-binary value!"
listSub :: [Int] -> String
listSub [] = []
listSub (x:xs) = (charSub x) : (listSub xs)
nestedSub :: [[Int]] -> String
nestedSub [] = []
nestedSub (y:ys) = (listSub y) ++ "\n" ++ (nestedSub ys)
map does pretty much the same thing--it applies a function to each element in a list. But it may be easier to see what's going on here.
A: First of all, I think you mean:
list1 :: [[Int]]
list1 = [[1,0,0],[0,1,0],[0,0,1]]
As for what you want:
valueOf :: Int -> Char
valueOf 0 = 'x'
valueOf 1 = 'y'
valueOf _ = 'z'
listValues :: [[Int]] -> [String]
listValues = map (map valueOf)
printValues :: [[Int]] -> IO ()
printValues = putStrLn . unlines . listValues
And then in ghci:
*Main> printValues list1
yxx
xyx
xxy
A: If you are interested in arbitrary nested lists, then you can write something like this (an arbitrary nested list is essentially a tree):
data Nested a = Leaf a | Nest [Nested a] deriving Show
traverse :: Nested Integer -> Nested Char
traverse (Leaf x) = Leaf (valueOf x)
traverse (Nest xs) = Nest (map traverse xs)
valueOf :: Integer -> Char
valueOf 0 = 'x'
valueOf 1 = 'y'
valueOf _ = 'z'
With that you can do:
Main> let nl = Nest [Leaf 1, Leaf 0, Nest [Leaf 0, Leaf 0, Leaf 1, Nest [Leaf 1, Leaf 1, Leaf 0]], Nest [Leaf 1, Leaf 1]]
Main> traverse nl
Nest [Leaf 'y',Leaf 'x',Nest [Leaf 'x',Leaf 'x',Leaf 'y',Nest [Leaf 'y',Leaf 'y',Leaf 'x']],Nest [Leaf 'y',Leaf 'y']]
The function traverse takes an arbitrary nested list of Integers and returns a corresponding nested list of Chars according to the valueOf rule
A: iterateList = foldl1 (>>).concat.intersperse [putStrLn ""].(map.map) (\c -> putStr $ if (c==0) then "X" else "Y")
A: The solutions
cambiar = putStr.unlines.(map (map f)) where f x = if x == 0 then 'x' else 'y'
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: .NET Workflow Scheduling, best practices, and is it the right solution I'm working on a project where I need to schedule processes. I'm thinking WF would be a great tool to use. But since this will be my first dive into WF, I've got a few questions.
Here is the project:
Based on time (every week, month, day, etc.) the system needs to notify someone (by email) that something needs to be done. The person needs to be able to tell the system it's done and provide info. If that person does not complete the task in a specified amount of time (2 days, for example), escalation occurs (notify the manager). That's the gist.
This needs to be manageable through a .NET WPF application that I'm writing (this is an add-on). Users pick time frames, plus who and when to notify. Also, there will be multiple workflows occurring at the same time.
Questions:
*
*Will and is WF a good fit?
*Since this a multi user app, should I create a service to run on a server and have app use remoting or other method to create, modify or delete WF?
*How do I schedule a workflow to fire on a specific date? Would I need to use something like http://quartznet.sourceforge.net/ (which looks fantastic)?
*Any good books on the subject that I should pick up?
Thanks in advance.
Rick
A: *
*From your description it sounds like WF4 will be a good fit.
*If the workflow structure is not subject to a lot of changes I would use a workflow service and, if possible, host it using IIS or, if that is not possible, use a Windows service and self-host using the WorkflowServiceHost. If your workflow is subject to a lot of change I would create a Windows service and use a WorkflowApplication for each instance.
*I have never used Quartz.net but it looks interesting. You can also use a master workflow that triggers the other workflows that do the actual work.
*Pro WF: Windows Workflow in .NET 4.0 is the best book I am aware of.
A: This is certainly a good fit for Windows Workflow Foundation (WF4).
Do you plan on implementing other clients besides the WPF app? Web clients? Device clients?
What about management and monitoring of this solution?
Do you plan on hosting the solution in Windows Azure at some point?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Missing Template Arguments This is part of a homework assignment, but I've reduced the problem as far as possible. The code presented doesn't really do anything, just creates some objects, so I am down to pure language issues.
I think my question is: Is it possible for a template class to create another class of the same template type?
The assignment is to have a main.cc as following:
#include "linkedlist.hh"
int main()
{
LinkedList<int> aList;
ListIterator elements = aList.iterator(); // if commented out, then no error
}
I have the following for linkedlist.hh:
template <typename T> class LinkedList;
template <typename T> class ListIterator;
template <typename T>
class LinkedList {
public:
ListIterator<T> iterator();
};
template <typename T>
ListIterator<T> LinkedList<T>::iterator() {
return new ListIterator<T>;
}
template <typename T>
class ListIterator {
};
Which give the following error with g++-4.6:
main.cc: In function ‘int main()’:
main.cc:6:18: error: missing template arguments before ‘elements’
main.cc:6:18: error: expected ‘;’ before ‘elements’
And a similar error with clang++-2.9:
main.cc:6:5: error: cannot refer to class template 'ListIterator' without a
template argument list
ListIterator elements = list.iterator();
^
In file included from main.cc:1:
./linkedlist.hh:16:11: note: template is declared here
class ListIterator {
UPDATE: Yes if I could change main.cc, then I could do the following, but I don't think that is what the assignment is.
main.cc
#include "linkedlist.hh"
int main()
{
LinkedList<int> aList;
ListIterator<int> elements = aList.iterator();
}
and linkedlist.hh
template <typename T> class LinkedList;
template <typename T> class ListIterator;
template <typename T>
class LinkedList {
public:
ListIterator<T> iterator();
};
template <typename T>
ListIterator<T> LinkedList<T>::iterator() {
ListIterator<T> anIterator;
return anIterator;
}
template <typename T>
class ListIterator {
};
A: Shouldn't that be ListIterator<int>?
A: ListIterator is a template, obviously you have to give it a template parameter. Try ListIterator<int>
A: This is how you could implement a generic iterator that could do some basic operations on the iterator for a list of any type of elements, but it would be unusual.
struct ListIteratorBase {
virtual void advance() = 0;
virtual bool atEnd() const = 0;
virtual ~ListIteratorBase() { }
};
template <typename T>
struct BasicListIterator : ListIteratorBase {
virtual void advance(); // implement this
virtual bool atEnd() const; // and this
T value() const; // and this
};
struct ListIterator {
template <typename T> ListIterator(BasicListIterator<T> &iter)
: iter_ptr(new BasicListIterator<T>(iter))
{
}
~ListIterator() { delete iter_ptr; }
void operator++() { iter_ptr->advance(); }
bool atEnd() const { return iter_ptr->atEnd(); }
// Method to get the value for a particular type. It is up to the caller to
// make sure the type is right at runtime.
template <typename T>
T get() const
{
BasicListIterator<T> *p = dynamic_cast<BasicListIterator<T> *>(iter_ptr);
assert(p);
return p->value();
}
ListIteratorBase *iter_ptr;
};
You would then need
template <typename T>
BasicListIterator<T> LinkedList<T>::iterator() { ... }
A: For that purpose, in C++11, the auto keyword was added;
see Type inference
A: Rename ListIterator and add typedef MyListIterator<int> ListIterator; to the end of the header. Then the main code will work unmodified.
This is just a trick to make this particular given code work. You can't make a standard library compliant iterator that is not a template or template-dependent. If you don't have to make a standard library compliant iterator, then you can employ a technique called type erasure (Vaughn Cato's answer), but that's a rather advanced topic.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Antipatterns of IoC container usage. Why IoC containers are so complex and used so "fancy" way? I'm seriously starting to think that using an IoC container encourages overdesigned solutions (at least it provokes me to try various unnecessary features :)).
It's time to synchronize my "IoC" antipatterns list with the community's.
My short experience tells me that it is absolutely enough to call the Resolve method once per application, at startup, to resolve some infrastructure singletons and to initialize with them a "transient object factory" that can produce new "smaller-lifetime grain factories". Even making those factories thread safe (e.g. one instance per thread) is easy to achieve by adding 10 lines of code to the factory... And those factories are still much simpler than "the library's integration with the IoC tool". Interception? Just create your own wrappers... Lifetime managers / dependency strategies / parent containers? Call Resolve only once, at the bootstrapper, and you won't have to think about any of that.
Could you help me understand why developers call Resolve several times on different application layers (by passing the container around, or by passing a delegate to the container) and then have a lot of things to think about? I really worry that I'm missing something.
A: Some kind of IoC are anti-patterns or may be in some cases. For example the service locator antipattern. But if you are using constructor injection at the beginning of your application - and only there - then it should not lead to an anti-pattern.
Injecting a DI container interface in a class is a wrong use of constructor injection. If DI is not part of the business logic of your class it should not know or depend on DI container nor should it depend on IKitchen. It's only fine to inject your DI container in some kind of helper or service working in conjunction with your dependency injection container, because it's purpose is to work with or around DI container. The examples in the links you give are misuse of IoC. It does not mean that IoC in general is an anti-pattern.
I think the correct question would be "Can constructor injection be an anti-pattern?". So far I've never faced any situation or seen any example where it was so I would say "no", until I face such a situation.
A: When it was not clear to me how to use an IoC container, I decided to stop using it, because I thought was just an overcomplication over the simple dependency injection.
It is true though that even without IoC is possible to fall in the over-injection cases.
A while ago I read some posts from the author of ninject that opened my mind.
As you already know the injector should be used only inside the context root. However, in order to avoid over-injections, I decided to introduce an exception of the rule for injected factories.
In my framework, factories (and only factories) can use the injector container. Factories are binded in the container in the context root and therefore can be injected. Factories become valid dependencies and are used to create new objects inside other objects, using the injector container to facilitate dependencies injection.
A: Read This
Clearly something wrong. New library should not bring additional complex code.
A: I've found somebody who possibly could understand me :)
Constructor over-injection anti-pattern
A: Other antipattern in my eyes is pushing the initialization of container "deeper" then actual bootsrapper.
For example Unity and WCF recommendations
Bootstrapper in wcf app is the service constructor, then just put container initialization to constructor. I do not understand reasons to recommend to go for programming wcf sevice behaiviors and custome sevice host factory: if you want to have "IoC container free" bootstrapper - it is absurd, if you need to have "IoC container free" service contract implementation - just create second "IoC container free" service contract implementation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: JPA Nullable JoinColumn I have an entity:
public class Foo {
@Id
@GeneratedValue
private Long id;
private String username;
@ManyToOne(cascade = { CascadeType.MERGE }, fetch = FetchType.LAZY, optional = true)
@JoinColumn(name = "ParentID", nullable = true)
private Foo parent;
// other fields
}
With this kind of relationship, each Foo object has a parent, except the first Foo instance in the DB, which has a NULL ParentID. When I try to create a query via the Criteria API and search for the first Foo object (null parent id) using any of its properties (ID, username, etc.), I get a:
javax.persistence.NoResultException: No entity found for query
How should the JoinColumn be implemented in this case? How should I get the first Foo object in the DB, which has a null parent ID?
Thanks
Updated September 28 2011
What I am trying to achieve is to look for all Foos with a username starting with "foo", and I would like a separate object to be returned:
CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
CriteriaQuery<FooDto> criteriaQuery = criteriaBuilder.createQuery(FooDto.class);
Root<Foo> foo = criteriaQuery.from(Foo.class);
criteriaQuery.multiselect(foo.get("id"), foo.get("username"), foo.get("parent").get("username"), foo.get("property1")
//other selected properties);
Predicate predicate = criteriaBuilder.conjunction();
predicate = criteriaBuilder.and(predicate, criteriaBuilder.like(foo.get("username").as(String.class), criteriaBuilder.parameter(String.class, "username")));
// other criteria
criteriaQuery.where(predicate);
TypedQuery<FooDto> typedQuery = entityManager.createQuery(criteriaQuery);
typedQuery.setParameter("username", "foo%");
// other parameters
// return result
Assuming that the usernames are foo1, foo2 and foo3, where foo1 has a null parent, the result set will only return foo2 and foo3, even though foo1 has a username that matches the specified predicate.
and also searching for foo1 alone will throw a
javax.persistence.NoResultException: No entity found for query
Is there a way to still include foo1 with the rest of the results? Or will foo1 always be a special case, since I have to add a criterion specifying that the parent is null? Or perhaps I have missed an option in the JoinColumn to do so.
Thanks
UPDATED Thu Sep 29 13:03:41 PHT 2011
@mikku
I have updated the criteria part of the post above (property1), because I think this is the part that is responsible for foo1 not being included in the result set.
Here's a partial view of the generated query from the logs:
select
foo.id as id,
foo.username as username,
foo.password as password,
foo.ParentID as parentId,
foo_.username as parentUsername,
foo.SiteID as siteId,
from FOO_table foo, FOO_table foo_ cross join Sites site2_
where foo.ParentID=foo_.id and foo.SiteID=site2_.id and 1=1
and foo.username=? and site2_.remoteKey=? limit ?
and this won't return Foo1, obviously, because the username value comes from foo_, which is why my original question was 'how should I get to Foo1'.
Regarding the part where I commented that I'll use selectCase and mix it with your first reply, what I did was add this to the multiselect:
criteriaBuilder
.selectCase()
.when(criteriaBuilder.isNotNull(agent.get("parent")), agent.get("parent").get("username"))
.otherwise(criteriaBuilder.literal("")),
replacing the
foo.get("parent").get("username")
However, this won't get Foo1 either. My last resort, though inefficient, would probably be to check whether the parameters refer to Foo1 and create a criteriaQuery specifying a literal value for username, using the default query otherwise.
Suggestions/alternatives are very much appreciated.
A: EDIT:
In your edited question the nullable join column is not the problem; the missing construction of FooDto (multiselect, etc.) is. You can achieve your goal with the following:
Querying:
/**
* Example data:
* FOO
* |id|username|property1|parentid|
* | 1| foo | someval| null|<= in result
* | 2| fooBoo | someval2| 1|<= in result
* | 3|somename| someval3| null|
*/
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<FooDto> c = cb.createQuery(FooDto.class);
Root<Foo> fooRoot = c.from(Foo.class);
Predicate predicate = cb.like(fooRoot.get("username").as(String.class),
cb.parameter(String.class, "usernameParam"));
c.select(
cb.construct(FooDto.class,
fooRoot.get("id"),
fooRoot.get("username"),
fooRoot.get("property1")))
.where(predicate);
TypedQuery<FooDto> fooQuery = em.createQuery(c);
fooQuery.setParameter("usernameParam", "foo%");
List<FooDto> results = fooQuery.getResultList();
Object to hold data:
public class FooDto {
private final long id;
private final String userName;
private final String property1;
//you need constructor that matches to type and order of arguments in cb.construct
public FooDto(long id, String userName, String property1) {
this.id = id;
this.userName = userName;
this.property1 = property1;
}
public long getId() { return id; }
public String getUserName() { return userName; }
public String getProperty1() { return property1; }
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: java gzip can't keep original file's extension name I'm using GZIPOutputStream to gzip an xml file to a gz file, but after zipping I find that the extension of the xml file (.xml) is missing in the gz file hierarchy. I need to keep the extension because the zipped gz file will be used by a third-party system which expects to get a .xml file after unzipping the gz file. Are there any solutions for this? My test code is:
public static void main(String[] args) {
compress("D://test.xml", "D://test.gz");
}
private static boolean compress(String inputFileName, String targetFileName){
boolean compressResult=true;
int BUFFER = 1024*4;
byte[] B_ARRAY = new byte[BUFFER];
FileInputStream fins=null;
FileOutputStream fout=null;
GZIPOutputStream zout=null;
try{
File srcFile=new File(inputFileName);
fins=new FileInputStream (srcFile);
File tatgetFile=new File(targetFileName);
fout = new FileOutputStream(tatgetFile);
zout = new GZIPOutputStream(fout);
int number = 0;
while((number = fins.read(B_ARRAY, 0, BUFFER)) != -1){
zout.write(B_ARRAY, 0, number);
}
}catch(Exception e){
e.printStackTrace();
compressResult=false;
}finally{
try {
zout.close();
fout.close();
fins.close();
} catch (IOException e) {
e.printStackTrace();
compressResult=false;
}
}
return compressResult;
}
A: Maybe I'm missing something, but when I've gzipped files in the past, say test.xml, the output I get would be test.xml.gz. Perhaps if you changed the output filename to test.xml.gz you would still preserve your original file extension.
A: Not sure what the problem is here, you are calling your own compress function
private static boolean compress(String inputFileName, String targetFileName)
with the following arguments
compress("D://test.xml", "D://test.gz");
Quite obviously you are going to lose the .xml portion of the filename, you never pass it into your method.
A: Your code is perfectly fine. Give the output file name as "D://test.xml.gz"; you missed the file extension (.xml).
Ex: compress("D://test.xml", "D://test.xml.gz");
A: You can also use an ArchiveOutput stream (like Tar) before GZipping it.
A: Use the ZipOutputStream with ZipEntry instead of GZipOutputStream. so that it will keep the original file extension.
Sample code as below..
ZipOutputStream zipOutStream = new ZipOutputStream(new FileOutputStream(zipFile));
FileInputStream inStream = new FileInputStream(file); // Stream to read file
ZipEntry entry = new ZipEntry(file.getPath()); // Make a ZipEntry
zipOutStream.putNextEntry(entry); // Store entry
A: I created a copy of GZIPOutputStream and changed the code to allow for a different filename "in the gzip":
private final byte[] header = {
(byte) GZIP_MAGIC, // Magic number (short)
(byte)(GZIP_MAGIC >> 8), // Magic number (short)
Deflater.DEFLATED, // Compression method (CM)
8, // Flags (FLG)
0, // Modification time MTIME (int)
0, // Modification time MTIME (int)
0, // Modification time MTIME (int)
0, // Modification time MTIME (int)
0, // Extra flags (XFLG)
0 // Operating system (OS)
};
private void writeHeader() throws IOException {
out.write(header);
out.write("myinternalfilename".getBytes());
out.write(new byte[] {0});
}
Info about gzip format: http://www.gzip.org/zlib/rfc-gzip.html#specification
A: I also had the same issue. I found that (Apache) commons-compress has a similar class, GzipCompressorOutputStream, that can be configured with parameters.
final File compressedFile = new File("test-outer.xml.gz");
final GzipParameters gzipParameters = new GzipParameters();
gzipParameters.setFilename("test-inner.xml");
final GzipCompressorOutputStream gzipOutputStream = new GzipCompressorOutputStream(new FileOutputStream(compressedFile), gzipParameters);
Dependency:
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-compress</artifactId>
<version>1.8</version>
</dependency>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Any tool to automatically fix simple JSLint issues? I've run JSLint for the first time on a rather lengthy file, and I have a lot of errors like expected exactly 1 space between "function" and "(" or unexpected ' '. I didn't realize this was considered important anywhere I learned about javascript, and now fixing each one of these rather simple things by hand seems frustrating. Some I can figure out with simple find-and-replaces, but I wondered if there are any tools online that will automatically make these changes for me, since they seem to be pretty straightforward.
(I have /*jslint white: false */ in my file, I develop in Netbeans and auto-format (except then I have to correct hanging jQuery chainings because it doesn't do it right), and my code still ends up with a huge number of things that jslint complains about as far as unexpected numbers of spaces.)
A: http://jsbeautifier.org/ should fix all your problems
A: jsfmt formats javascript and allows AST searching and rewriting. Analogous to gofmt.
In some ide's like netbeans you can automatically format code with alt+shift+f.
There are also online ones. http://jsbeautifier.org/
A: Simply use IDE which supports custom code formatting. Like NetBeans, WebStorm or Visual Studio.
A: There's a bunch of tools around for doing things like this. I use JS Beautifier which will at least fix indentation errors and also the spaces-around-functions-part (I've tested it, yay!)
A: If you use/have Visual Studio it does formatting of JavaScript too. You may need to configure formatting options from defaults.
A: While it checks for different things than JSLint, the fixjsstyle mode of the Google closure linter may do what you want.
It automatically fixes code to (more closely) fit with the Google Javascript style guide which is well worth a read.
As others have pointed out, the Javascript beautifier is the way to go for spacing issues.
A: There's an npm module called fixmyjs.
In "legacy mode" with JSHint:
var jshint = require('jshint').JSHINT
var fixmyjs = require('fixmyjs')
jshint(stringOfCode, objectOfOptions)
var stringFixedCode = fixmyjs(jshint.data(), stringOfCode, objectOfOptions).run()
Works great!
There is also a Sublime Text 2/3 Package.
A: Damon, Prettier is probably going to do everything you want wrt painless javascript code formatting. It will convert your code to an AST and then pretty print it back into your file so it auto-formats as you go. You can even add it as a precommit hook or run it on a folder full of files (pretty quickly, too!) so that your entire codebase will be immediately pretty.
Here is a video from ReactConf that explains it pretty well
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: jQuery form throws "missing ) after argument list" error if optional inputs are not filled in I have a form that submits to a jQuery plugin I've written and used on several projects, and I'm trying to expand its functionality. I've never had any issues with it until now, when I decided to add optional fields to my form. All input text fields on my form are pre-populated via another jQuery function with brief instructions, which disappear when the field receives focus and the user begins typing.
The problem is, the optional inputs post their pre-populated values if the user does not enter his own, and in this case my jQuery plugin returns a "missing ) after argument list" error. The error does not appear if the user fills in all fields.
Here is the code I'm using:
$(document).ready(function(){
$("#contact-submit").click(function(){
var valid = '';
var isr = ' is required.';
var fname = $("#cfname").val();
var company = $("#ccompany").val();
var email = $("#cemail").val();
var phone = $("#cphone").val();
var location = $("#clocation").val();
var website = $("#cwebsite").val();
var design = $("#cdesign:checked").val();
var security = $("#csecurity:checked").val();
var social = $("#csocial:checked").val();
var seo = $("#cseo:checked").val();
var video = $("#cvideo:checked").val();
var presence = $("#cpresence:checked").val();
var customers = $("#ccustomers:checked").val();
var showcase = $("#cshowcase:checked").val();
var campaign = $("#ccampaign:checked").val();
var ecommerce = $("#cecommerce:checked").val();
var digital = $("#cdigital:checked").val();
var sec = $("#csec:checked").val();
var vid = $("#cvid:checked").val();
var gseo = $("#cgseo:checked").val();
var other = $("#cother:checked").val();
var budget = $("#cbudget").val();
var time = $("#ctime").val();
var os = $("#cos").val();
var comment = $("#ccomment").val();
if (!email.match(/^([a-z0-9._-]+@[a-z0-9._-]+\.[a-z]{2,4}$)/i)) {
valid += '<br />A valid Email'+isr;
}
if (fname.length<1) {
valid += '<br />Your name'+isr;
}
if (phone.length<13) {
valid += '<br />A 10-digit phone number of the format (xxx)xxx-xxxx'+isr;
}
if (phone.length>13) {
valid += '<br />A 10-digit phone number of the format (xxx)xxx-xxxx'+isr;
}
if (valid!='') {
$("#contact-message").fadeIn("slow");
$("#contact-message").html("Error:"+valid);
}
else {
var datastr ='fname=' + fname + '&company=' + company + '&email=' + email + '&phone=' + phone + '&location=' + location + '&website=' + website + '&design=' + design + '&security=' + security + '&social=' + social + '&seo=' + seo + '&video=' + video + '&presence=' + presence + '&customers=' + customers + '&showcase=' + showcase + '&campaign=' + campaign + '&ecommerce=' + ecommerce + '&digital=' + digital + '&sec=' + sec + '&vid=' + vid + '&gseo=' + gseo + '&other=' + other + '&budget=' + budget + '&time=' + time + '&os=' + os + '&comment=' + comment;
$("#contact-message").css("display", "block");
$("#contact-message").html("Submitting...");
$("#contact-message").fadeIn("slow");
setTimeout("send('"+datastr+"')",2000);
}
return false;
});
});
function send(datastr){
$.ajax({
type: "POST",
url: "plugins/hm_custom/mail-consult.php",
data: datastr,
cache: false,
success: function(html){
$("#contact-message").fadeIn("slow");
$("#contact-message").html(html);
setTimeout('$("#contact-message").fadeOut("slow")',2000);
}
});
}
What makes it even more confusing, is that if I reduce the number of inputs back to my original six text fields, no errors are thrown, even if I fail to insert text into the optional fields. I'm sure that my code is much more complex than it has to be, it's a work in progress and I'm slowly learning how to condense it bit by bit. I have to think my problem must be a syntax error that is being tripped conditionally, but I am very new to Javascript and jQuery, so I have no clue.
Any ideas?
A: Your issue, i think, is this line:
setTimeout("send('"+datastr+"')",2000);
You are building a string that defines the function name, and by adding in the dataStr, you are building an invalid name.
If you use a closure instead of a hardcoded string, this should alleviate the issue:
setTimeout(function() { send(dataStr); },2000);
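An alternative sketch that avoids building the argument string altogether: pass the values as an object and let jQuery encode them (the object below only lists a few of the original variables):
var data = { fname: fname, company: company, email: email /* ...and the rest... */ };
setTimeout(function() { send(data); }, 2000);
function send(data) {
    $.ajax({
        type: "POST",
        url: "plugins/hm_custom/mail-consult.php",
        data: data, // jQuery URL-encodes the object for you
        cache: false,
        success: function(html) {
            $("#contact-message").fadeIn("slow").html(html);
        }
    });
}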
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is PhoneGap secure? How much has phonegap been hardened against attackers? Especially XSS flaws in our own pages, where the PhoneGap API is exposed to an unknown attacker.
For example, is the PhoneGap.exec() command secure on the iPhone?
From JavaScript, the PhoneGap.exec command worries me e.g. PhoneGap.exec(successCB, errorCB, "com.phonegap.contacts","search", [{"fields":fields, "findOptions":options}]); (JavaScript for iPhone target copied from here). The exec command should in theory be restricted, and should only be able to access PhoneGap API classes (com.phonegap.contacts in this example) and methods (e.g. search in this example).
If there is an XSS vunerability in our app then any attacker has an expanded attack surface compared with running in just the browser sandbox. The end user's phone is exposed to to any vulnerabilities in PhoneGap that could allow an attacker to gain access to privileged ObjectiveC code/api's. The only documentation I could find on PhoneGap security was this.
A: You can control API access by modifying PhoneGap.plist/Plugins and removing any un-needed ones.
With PhoneGap 1.1 (coming soon) - there is a white-list feature (in PhoneGap.plist/ExternalHosts) where only certain external urls can be connected to - either in JavaScript or Objective-C.
A: This talks about a Cordova/PhoneGap security issues:
http://packetstormsecurity.com/files/124954/apachecordovaphonegap-bypass.txt
"The following email was sent to Apache Cordova/PhoneGap on 12/13/2013, and again on 1/17/2014.
As there has been no response, we are re-posting it here to alert the general public
of the inherent vulnerabilities in Apache Cordova/PhoneGap." would also concern me if it is true.
On Android if PhoneGap uses addJavascriptInterface() for the bridge, then that has serious security implications:
http://www.droidsec.org/news/2014/02/26/on-the-webview-addjsif-saga.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: General knowledge question regarding classes and class instances in C++ I'll start with my problem:
My problem is that I'm getting a segmentation fault when I try to access a public function within an instance within a class.
Suppose I have a class A that has a bunch of class instances in it like so:
class A {
...
public:
class B *B;
class C *C;
};
Let's consider class A to be our "global" class; that is, class A is sent to every class that I initialize and own. Therefore, every class function can call functions from class A through A->function(). Furthermore, assuming other instances have been initialized, every class can call functions of instances belonging to A, like so: A->B->function(). Up to now, everything worked great. My problem is that I'm suddenly getting a segfault when trying to access a function of one of the instances belonging to A. What I think the reason is (and why I am asking this question, because I am unsure), is that the class instance A is sent to class C before instance B has been initialized. Then in class C, I simply create a pointer copy (meaning my class C has a private instance of A called class A *A). So then when I first create an instance of class C, I send it class A and all of it's public members, which C's constructor then makes the local copy of A.
Phew. That was difficult to do without presenting code. So here's the question; say one of my class A functions contains:
C *c = new C(this);
and the constructor for C has this:
C(A *a_val) { a = a_val; }
while also containing a private instance-pointer class A *a;
then another function in A instantiates the B class. Would my C class be able to use that B class? And if not (which is why I think I'm segfaulting), how could I possibly solve this issue?
Thanks!
A: In general your code will segfault if you dereference a pointer to an unitialized variable. From your description, this sounds like what is happening, although it's hard to tell given the lack of an example that fails.
another function in A instantiates the B class. Would my C class be
able to use that B class?
Yes. This should work provided you instantianted correctly and passed the pointer to B correctly to C. Also note that it would help to get the terminology a bit more correct in the above question:
another function in A instantiates an object of the the B class. Would my C object be
able to use that B object?
I believe the above is what your question really meant to say.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Stack not returning empty even though it has no objects This is seriously confusing and frustrating me. I already asked one question regarding this same program here. Going off of that code, I'm having yet another problem with Stack. Here is a method that is using the array of stacks, called blocks, from the previous post:
static void pileOnto(int sBlock, int rBlock)
{
boolean flag = true;
while ((!blocks[rBlock].empty()) && (flag))
{
retainer.push((Integer)blocks[sBlock].pop());
if (((Integer)blocks[rBlock].peek()).intValue() == sBlock) {flag = false;}
}
while (((Integer)blocks[rBlock].peek()).intValue() != rBlock)
{
returnBlock(((Integer)blocks[rBlock].pop()).intValue());
}
}
The first while loop should either end when the stack is empty, or when the last Integer it popped was the same as sBlock. The problem that I'm having is that blocks[rBlock].empty() never returns true, even when the program crashes from trying to pop off of blocks[rBlock], meaning that there can't be anything in the stack. Can someone please explain to me what is going on?
A: In the first loop you are testing one stack (i.e. blocks[rBlock].empty()) and popping a different stack (i.e. blocks[sBlock].pop()).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: tell java to accept a self-signed certificate (to some extent a repeated question) Alright, so when I attempt to access the file manager through webmin it fails and throws an error to the tune of:
failed to get language list : javax.net.ssl.SSLHandshakeException : java.sucurity.cert.CertificateException: Java couldn't trust server
So, I read a little bit and found out (I think) that I need to configure java to accept the self-signed certificate. Now,
telling java to accept self-signed ssl certificate
This really didn't clear anything up because I'm really not sure that we have the same problem, or if both our problems have the same solution? So, I was wondering if someone could point me in the right direction with some articles, possibly tell me what java I need to be configuring (I'm assuming that it is the VMware java?)
Thoughts?
A: javax.net.ssl.SSLHandshakeException : java.sucurity.cert.CertificateException: Java couldn't trust server
That's not a Java message. You would need to post the actual message of the exception, not something made up by the application. (Never do that.)
To answer the original question, all you have to do is import the certificate into the client's truststore.
See here for a keytool-less way of doing it, with thanks to Andreas Sterbenz.
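If you do want the plain keytool route, the usual pattern looks something like this (alias, certificate file, and keystore path are placeholders; the default cacerts password is commonly "changeit"):
keytool -importcert -alias webmin -file webmin.crt -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit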
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Cannot monitor word docs using FileSystemWatcher I have a folder that contains multiple word documents. I need to monitor this folder for any changes in these word documents. I am facing the following problems:
*
*FileSystemWatcher never reports the exact name of file being changed. For example for file abc.doc, it reports "~$abc.doc is changed" on first save.
*For all subsequent saves to that file, OnChanged event in the following code is not called. When I changed the filter to watcher.Filter = "*.*", I found that for subsequent saves, it reports "~WRL0001.tmp is changed".
So the bottom line is that I never know the exact name of the file changed.
I am using the following code
public static void Main()
{
FileSystemWatcher watcher = new FileSystemWatcher();
watcher.Path = @"C:\Users\Administrator\Documents\"; //"
watcher.NotifyFilter = NotifyFilters.Size;
watcher.Filter = "*.doc";
watcher.Changed += new FileSystemEventHandler(OnChanged);
watcher.EnableRaisingEvents = true;
Console.WriteLine("Press \'q\' to quit the sample.");
while (Console.Read() != 'q') ;
}
private static void OnChanged(object source, FileSystemEventArgs e)
{
Console.WriteLine("File: " + e.FullPath + " " + e.ChangeType);
}
A:
File system watcher never reports the exact name of file being
changed. For example for file abc.doc, it reports "~$abc.doc is
changed" on first save.
The reason for this is that Word creates several temp files in the current directory where the original file is opened and the FileChanged event is fired when a new file is created also. In fact, FileSystemWatcher fires FileCreated followed by a FileChanged event. Since you don't subscribe to FileCreated you are only seeing the FileChanged notification.
For all subsequent saves to that file, OnChanged event in the
following code is not called. When I changed the filter to
watcher.Filter = ".", I found that for subsequent saves, it reports
"~WRL0001.tmp is changed".
Same as above.
But I was curious about your problem and I did a little change to your program and modified it as follows (posting only relevant lines):
watcher.NotifyFilter = NotifyFilters.Attributes;
watcher.Filter = "*.doc";
watcher.Changed += new FileSystemEventHandler(OnChanged);
watcher.EnableRaisingEvents = true;
And then I saw the actual name of the file being changed printed on the console when the file was saved. When I looked at what attributes had changed in the original document from one save to the next one, I noticed that the revision number was being incremented by 1 (I know, revision number is not a file attribute from the OS point of view). I'm sure other attributes -for lack of a better word- got changed. It's up to you if you want to set the NotificationFilter to NotifyFilters.Attributes; to make this work but it is definitely odd that it wouldn't work by having NotificationFilter =NotifyFilters.Size | NotifyFilters.LastWrite; for example.
A: Saving to random file name is routinely done to keep file change operation atomic and avoid destroying user's data on write failures.
Usual sequence:
*
*Open file "myfile.ext"
*Edit
*Write change file to "some_temp_name" in the same folder
*If write succeeded rename "myfile.ext" to "myfile.old", rename "some_temp_name" to "myfile.ext".
Depending on program the sequence could be different - i.e. last renames could be file copy, or delete of original. You have to watch for all types of changes in the destination folder and observe final updates to files you are interested (which could be create notifications instead of changes).
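A sketch of widening the question's watcher along those lines (the event names are the standard FileSystemWatcher ones):
watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite | NotifyFilters.Size;
watcher.Filter = "*.*"; // the temp files Word writes won't match *.doc
watcher.Created += new FileSystemEventHandler(OnChanged);
watcher.Changed += new FileSystemEventHandler(OnChanged);
watcher.Renamed += new RenamedEventHandler(
    (s, e) => Console.WriteLine("Renamed: " + e.OldFullPath + " -> " + e.FullPath));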
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Google Places Autocomplete in ASP.NET Any one have wrapper in ASP.NET for Google Places Autocomplete API?
http://code.google.com/apis/maps/documentation/places/autocomplete.html
Samples or approach is highly appreciated.
Thanks
A: Here is a link to a demo of the Google Places autocomplete:
http://code.google.com/apis/maps/documentation/javascript/examples/places-autocomplete.html
Right-click in the browser and view the page source; there you can see the approach and how it is done.
The related documentation can be found here:
http://code.google.com/p/geo-autocomplete/
A: In Google's API v3, they show an example that includes this snipped:
function fillInAddress() {
// Get the place details from the autocomplete object.
var place = autocomplete.getPlace();
for (var component in componentForm) {
document.getElementById(component).value = '';
document.getElementById(component).disabled = false;
}
// Get each component of the address from the place details
// and fill the corresponding field on the form.
for (var i = 0; i < place.address_components.length; i++) {
var addressType = place.address_components[i].types[0];
if (componentForm[addressType]) {
var val = place.address_components[i][componentForm[addressType]];
document.getElementById(addressType).value = val;
}
}
}
However, in ASP.NET one needs to use the format document.getElementById("<%= controlname.ClientID %>")
How to create the syntax in the fillInAddress() above to pass the 'component' object and create the correct syntax to make this work? Thanks
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: An If statement within an If statement I'm checking to see if a check box has been checked on a search form. If that box has been checked, the value it outputs will be "No".
So if the value is "No", then I want to use an If statement to echo some PHP. The problem is, the PHP that I want to echo is an actual If statement itself.
Here's my code right now:
$showoutofstock = $_SESSION['showoutofstock'];
if ( $showoutofstock == "No" ) {
}
if($_product->isSaleable()): {
}
A: You can't really "echo some PHP". PHP is processed on the server-side, and the result is usually HTML that the browser client reads and displays.
It's not clear what you're actually trying to accomplish. Can you clarify a bit?
This might help though -- it's simply showing how to nest if statements, which may be all that you're asking for:
<?php
$showoutofstock = $_SESSION['showoutofstock'];
if ( $showoutofstock == "No" ) {
if($_product->isSaleable()) {
}
}
?>
A: You are closing your if before writing another if inside it:
<?php
$showoutofstock = $_SESSION['showoutofstock'];
if ( $showoutofstock == "No" ) { //main if clause start
if($_product->isSaleable()) {//if clause inside main if start
} //inner if clause ends here
}//outer if clause ends here
?>
Try it like this. Yes, you can check the value of the checkbox and write your code accordingly inside that if clause.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: jQuery FadeOut not working function fadeInSubheader() {
$('#sub1').fadeIn().delay(1000).queue(function() {
$('#sub2').fadeIn().delay(1000).queue(function() {
$('#sub3').fadeIn().delay(5000).queue(function() {
fadeOutSubheader();
});
});
});
}
function fadeOutSubheader() {
console.log('fading out');
$('#sub1').fadeOut(function() {
$('#sub2').fadeOut(function() {
$('#sub3').fadeOut(function() {
fadeInSubheader();
});
});
});
}
It's supposed to loop once started. It does start, and the fadeOutSubheader function is called (the console log shows 'fading out' like it's supposed to), but the elements do not fade out. Any ideas?
PS. The fade out is supposed to happen altogether preferably.
A: According to the jQery doc for .queue(), when you use .queue(fn), you have to .dequeue() in the function to keep things going properly. You can see it work here: http://jsfiddle.net/jfriend00/Py2hL/.
function fadeInSubheader() {
$('#sub1').fadeIn().delay(1000).queue(function() {
$(this).dequeue();
$('#sub2').fadeIn().delay(1000).queue(function() {
$(this).dequeue();
$('#sub3').fadeIn().delay(5000).queue(function() {
$(this).dequeue();
fadeOutSubheader();
});
});
});
}
function fadeOutSubheader() {
console.log('fading out');
$('#sub1').fadeOut(function() {
$('#sub2').fadeOut(function() {
$('#sub3').fadeOut(function() {
fadeInSubheader();
});
});
});
}
If you really want the fadeOuts to all go together, then replace the fadeOutSubheader() with this to just run them all at once:
function fadeOutSubheader() {
console.log('fading out');
$('#sub1, #sub2').fadeOut();
$('#sub3').fadeOut(fadeInSubheader);
}
This is implemented here: http://jsfiddle.net/jfriend00/BYGpa/
A: *
*You're not using .dequeue() which is causing problems with your looping. According to the docs, "dequeue basically removes and executes the next function in the queue, letting the sequence continue."
*To fadeOut all 3 items at once, like you write you want, simply use: $('#sub1,#sub2,#sub3').fadeOut() then to use a call back once for all 3 of those fades use: .promise().done(fadeInSubheader) (see my example below)
Working example
$(function() { // on ready
// Define functions as local variables
var fadeInSubheader = function() {
$('#sub1').fadeIn().delay(1000).queue(function() {
$('#sub2').fadeIn().delay(1000).queue(function() {
$('#sub3').fadeIn().delay(5000).queue(function() {
fadeOutSubheader();
$(this).dequeue();
});
$(this).dequeue();
});
$(this).dequeue();
});
},
fadeOutSubheader = function() {
console.log(++i);
// since jQuery 1.6 you can use promise / done so that
// the callback only happeens once - default is once for each element
$('#sub1,#sub2,#sub3').fadeOut(1000).promise().done(fadeInSubheader);
}, i=0;
// Let's start the loop!
fadeInSubheader();
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Are there any differences if I'm retrieving data in a DataSet instead of a DataTable
Possible Duplicate:
Datatable vs Dataset
I want to know the difference in terms of memory, efficiency if I'm using DataSet instead of DataTable to fill data.
I want to explain this to a third party so strong reason needed.
A: A DataSet is an in-memory representation of a database: it contains a collection of DataTable objects, and DataRelation objects are used to relate these tables.
On the other hand, a DataTable represents an in-memory data cache for a single table of the database.
So if you are dealing with only a single table, it's better to use a DataTable instead of a DataSet.
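A small sketch of the difference in use (connection string and table names are placeholders):
// Single table: a DataTable is enough
var table = new DataTable();
using (var adapter = new SqlDataAdapter("SELECT * FROM Customers", connectionString))
{
    adapter.Fill(table);
}
// Several result sets: a DataSet keeps the tables (and any DataRelations) together
var set = new DataSet();
using (var adapter = new SqlDataAdapter("SELECT * FROM Customers; SELECT * FROM Orders", connectionString))
{
    adapter.Fill(set); // produces set.Tables["Table"], set.Tables["Table1"], ...
}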
A: DataSet can hold multiple DataTables and datasets maintain some internal indexes to improve performs for things like finds and selects.
Some interesting discussions
How can I improve the loading of my datasets
Datatable vs Dataset
A: It really depends on the sort of data you're bringing back. Since a DataSet is (in effect) just a collection of DataTable objects, you can return multiple distinct sets of data into a single, and therefore more manageable, object.
Performance-wise, transferring a DataSet takes more time than a DataTable, since a DataSet is a collection of DataTables.
See this for more details.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to run a J2ME game at all screen resolutions like AndEngine I am developing a game in J2ME and Android. On Android I am using AndEngine to develop the game and it supports all screen resolutions. But in J2ME I need to work out the porting. Is there any tool like AndEngine for J2ME? Any tools for J2ME?
A: AndEngine gets scaling for free because it's using OpenGL. If you want scaling to work for J2ME like it does for AndEngine, you'll have to use OpenGL also. Otherwise, you'll probably need to calculate the scale yourself. Since it's not too complicated, I don't think there are any libraries made specifically for this problem.
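A rough sketch of doing that scaling by hand on MIDP (the design resolution and coordinates are made up):
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
public class ScaledCanvas extends Canvas {
    // the resolution the game was originally designed for
    private static final int DESIGN_W = 240;
    private static final int DESIGN_H = 320;
    protected void paint(Graphics g) {
        int w = getWidth();   // actual screen width
        int h = getHeight();  // actual screen height
        // map a design-space rectangle (100, 150, 40x40) onto the real screen
        g.fillRect(100 * w / DESIGN_W, 150 * h / DESIGN_H,
                   40 * w / DESIGN_W, 40 * h / DESIGN_H);
    }
}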
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: When a SqlConnection is kept open, does it update? In my ASP.NET project, I create/open a SqlConnection when necessary, and close it at Application_EndRequest
Nevermind that, I am just wondering what would happen if, while a connection (say Connection A) is opened, the database is updated from another request (say Connection B).
So it's like this
(Say initially x is 1)
A.Open()
B.Open()
B.UpdateX() --> SET x=2
B.Close()
A.SelectX() --> would this return 1 or 2?
A.Close()
A: A SQL connection never updates - it is nothing more than a pipe to the server. That is like saying your car updates when you move it somewhere else.
The transaction on the server side of the session the connection is attached to (note: you can have multiple connections to the same transaction - though I am sure most people don't know that), whether implicit or explicit, shows data according to its configured isolation level. Sometimes you want one thing, sometimes the other. Sit down and design.
Now, one issue with keeping the connection open is that it does not properly reset between pages, which may lead to all kinds of stupid issues down the road - pretty much an antipattern. We had an issue here recently in a project with Oracle where the server disconnected clients after 2 hours without a data request... and the connection did not "close" (or show as closed) until the next SQL was sent. That leads to funny errors down the line that you don't want - use connection pooling to offset the performance overhead.
A: The data returned depends on the state of the DB at the time you run the query and not at the time when you opened the connection.
Let's say you don't use any specific transaction management in your code, and let's assume that the update statement is one distinct row or table update. In this case connection A will see X=2. This is because both connections use the default isolation level, which is read committed.
Now, in your example there is no way to make connection A read X=1 if it is a single-value update on a single row. But if you use a transaction on connection B, don't commit, and leave the connection open, the query from connection A will block until its timeout expires. Basically, X would not be accessible until B was done.
Also, if you are updating 10 million rows in one transaction on connection B, and connection A is in a different thread and transaction, there is a possibility for connection A to read some old/stale/invalid data by using the transaction isolation level "read uncommitted".
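For example, connection A could read through an explicit transaction whose isolation level you choose yourself (a sketch; the connection string and table are placeholders):
using (var connA = new SqlConnection(connectionString))
{
    connA.Open();
    using (var tx = connA.BeginTransaction(IsolationLevel.ReadCommitted))
    using (var cmd = new SqlCommand("SELECT x FROM SomeTable WHERE id = 1", connA, tx))
    {
        int x = (int)cmd.ExecuteScalar(); // sees whatever is committed when the query runs
        tx.Commit();
    }
}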
Hope this helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Anyone can give me a summary of "single quote mark" usage in Ada? I've just read "Ada Programming" but I'm a bit confused about how to use ' (single quote mark) in Ada.
I can understand that ' is used for reference attribute. AAA'Image(..), BBB'Value(..)
However, considering this piece of code:
type Plain_Vector (Capacity : Capacity_Subtype) is record
Elements : Elements_Array (1 .. Capacity);
Last : Extended_Index := No_Index;
Busy : Natural := 0;
Lock : Natural := 0;
end record;
------------------------------------------------------------------
new Plain_Vector'(2, (Left, Right), Last => Last, others => <>)
Q1: How the "new" statement's arguments matches the type's parameter and record fields?
I can GUESS "2" matched "Capacity",
"(Left, Right)" matched "Elements",
"Last => Last" matched "Last"
"Others => <>" matched "Busy" and "Lock" to let them use default value.
But this is just a GUESS, are there any official grammar explanation on this?
Q2: What does the ' do? (in the "new" statement)
Is it an attribute or does it have any other meanings?
Where can I find a summary usage of "single quote mark" in Ada?
I spent long time trying to find out those information, but no luck.
Thank you in advance. Miles.
A: If you have a soft copy of the Ada Reference Manual, you can search for the ' character in the Syntax Summary (it's Annex P in the latest version I have; check the table of contents).
The ' character is used for:
*
*Character literals: 'x'
*Attribute references: Foo'Size
*Qualified expressions: Some_Type'(expression), Some_Type'Aggregate
It's also used in representation clauses (now called "aspect clauses"); these look a lot like attribute references: for Foo'Size use 32;.
And of course it can appear in a comment or in a string or character literal.
The example in the code you posted is a qualified expression.
Suggestion: In contexts other than character literals the character ' should probably be referred to as an apostrophe, since it's not acting as a quotation mark. For attributes and qualified expressions, it's sometimes pronounced "tick": I'd read Foo'Size as "foo tick size".
(And new is an expression, not a statement.)
A: You seem to be asking specifically about qualified expressions (Keith's third bullet in his answer).
In Ada, if you have two objects of different types you can attempt to convert between them by using the destination type's name like a function name, like so:
Foo : constant Integer := Integer (2.35);
Generally this only works if both types are numeric types, or if one is derived from the other (declared as type New_Type is new Old_Type ...).
The compiler will of course have to add code to verify that the value falls within any contraints the destination type may have. But this is very useful for simple type conversions.
However, when you are dealing with expressions, sometimes what you want isn't a conversion, but rather to tell the compiler what type to make the expression. No (runtime) code should be required to do this, just make the expression the type I tell you to.
Compilers can generally figure this out from context, but sometimes they can't. This is where that apostrophe comes in. It tells the compiler not to convert the expression to the specified type, but rather to create it as that type in the first place.
The most common use for this is when performing dynamic allocations, as shown in your example. Sometimes there may be other situations where it is needed though. One example might be when passing a literal value into an overloaded routine. Say you have two versions of the procedure My_Routine, one that takes in an Integer, and the other taking in a different custom integer type. If you pass objects into it, the compiler can just look at the object's type. However, if you pass in a literal 1, most likely you will get a compiler error that the expression is ambiguous.
You could solve this by putting your literal 1 into a constant integer and passing that in (then grumbling about your stupid compiiler). However, the easier thing to do is the following:
My_Routine (Integer'(1));
That resolves the ambiguity for your compiler. This isn't a "conversion", so no extra code is needed. You are just telling the compiler that the following expression is of type Integer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: What is the easiest way to turn an array of RGB values in C++ into an image file? I've been looking all over the net for a good, quick solution to this, and haven't found anything that has satisfied me yet. It seems like it should be trivial--just one or two calls to a function in some library and that's it--but that doesn't seem to be the case. libjpeg and libtiff lack good documentation and the solutions people have posted involve understanding how the image is produced and writing ~50 lines of code. How would you guys do this in C++?
A: The easiest way is to save it as a Netpbm image. Assuming that your array is packed into 24 bits per pixel with no padding between pixels, then you can write out a super-simple header followed by the binary data. For example:
void save_netpbm(const uint8_t *pixel_data, int width, int height, const char *filename)
{
// Error checking omitted for expository purposes
FILE *fout = fopen(filename, "wb");
fprintf(fout, "P6\n%d %d\n255\n", width, height);
fwrite(pixel_data, 1, width * height * 3, fout);
fclose(fout);
}
A: If you want "simple" over anything else, then have a look at stb_image_write.h.
This is a single header file, which includes support for writing BMP, PNG and TGA files. Just a single call for each format:
int stbi_write_png(char const *filename, int w, int h, int comp, const void *data, int stride_in_bytes);
int stbi_write_bmp(char const *filename, int w, int h, int comp, const void *data);
int stbi_write_tga(char const *filename, int w, int h, int comp, const void *data);
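A minimal usage sketch for the PNG writer, assuming a tightly packed RGB buffer (3 bytes per pixel):
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
#include <vector>
#include <cstdint>
int main() {
    const int w = 256, h = 256, comp = 3; // RGB
    std::vector<std::uint8_t> pixels(w * h * comp);
    for (int i = 0; i < w * h; ++i) {     // fill with a simple pattern
        pixels[i * 3 + 0] = static_cast<std::uint8_t>(i % 256); // R
        pixels[i * 3 + 1] = 0;                                  // G
        pixels[i * 3 + 2] = 255;                                // B
    }
    return stbi_write_png("out.png", w, h, comp, pixels.data(), w * comp) ? 0 : 1;
}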
A: In my Graphin library you can find simple function:
bool EncodeJPGImage(image_write_function* pctl, void* streamPrm, unsigned width, unsigned height, unsigned char** rows, unsigned bpp, unsigned quality)
that does this conversion. See http://code.google.com/p/graphin/source/browse/trunk/src/imageio.cpp#412
The library: http://code.google.com/p/graphin/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: For each loop not working I have some master categories and user groups; user groups are assigned to master categories. I am using the following code, but it only displays the last record in the table, while I need to display all matching records.
Function in controller:
function listdesignation($id) {
$res = $this->designation_model->GetDesignation($id);
$this->data['test'] = $res;
if ($res) {
foreach ($res as $row) {
$this->data['selected_designation'] = $this->designation_model->GetDesignation_Names($row['designation_id']);
}
if ($this->data['selected_designation']) {
$this->data['enable_view'] = true;
}
}
}
Function in model:
function GetDesignation_Names($id) {
$designation = array();
$this->db->select('*');
$this->db->from('delta_designation');
$this->db->where('designation_id', $id);
$query = $this->db->get();
if ($query->num_rows() > 0) {
foreach ($query->result() as $row) {
$designation[$row->designation_id]['designation_id'] = $row->designation_id;
$designation[$row->designation_id]['designation'] = $row->designation;
}
return $designation;
}
return false;
}
A: You're overwriting your variable with each iteration through your foreach loop. You probably want to build up an array instead.
E.g. instead of
foreach (...) {
$myvar = $row->field;
}
Try this:
$myresult = array();
foreach (...) {
$myvar = $row->field;
$myresult[]= $myvar; // (append to array)
}
Then you can:
foreach ($myresult as $onerow) {
...
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Create complex data from Strings in Scala I have defined a number of case classes such as
abstract class Foo
case class Bar(s: String) extends Foo
case class Baz(f: Foo) extends Foo
case class FooBar(l: Foo, r:Foo)
that allow me to create complex data, e.g.,
val x = FooBar(Bar("1"), Baz(Bar("2")))
I want to read these type of data from a string, such as
val x = what_to_do_here?("FooBar(Bar("1"), Baz(Bar("2")))")
In a dynamic language I would just call eval.
(Edit: I really do not want to call something like eval in scala)
The solution I came up with in scala was to write a parser. Is there a simpler way to do that?
A: You are assuming that there's a construct that's symmetric with toString. I'm pretty sure there isn't one.
Since what you're discussing is a classic serialization/deserialization scenario, you may want to look into a serialization library (one possibility that comes to mind is lift-json, which with I've had considerable success, but there are certainly alternatives). Either that, or I've completely missed your usage scenario :-)
A: You can use the scala interpreter to write your own eval function. Since the interpreter is actually a compiler, I don't think this will be very fast.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Is there a way for a spinner option to open another spinner? I want it where when someone clicks an option in a Spinner, it opens another spinner with more options. Also, is there a way for an "Other" option to open an EditText where someone can input their selection if theirs isn't available in the Spinner?
Example:
Spinner 1 has these options:
iOS
Android
And if they select iOS, another spinner comes up immediately where the options are all the iPhone versions. (i.e., titled "Which iPhone do you have?")
And if they select Android, it does the same thing, but with Android devices.
AND if their phone isn't on the second spinner, they type the model of their phone in.
How could I do this if I have the first spinner already in my code?
P.S., if needed, I can post the code for the first spinner, though it's pretty standard.
A: Basically build your second spinner programmatically depending on which option they choose. I'd add "other" to each second spinner. If they choose "other" then you can display the text box.
A: I hope this will be useful to you.
Try this Code...
public class MainActivity extends Activity {
Spinner sp1,sp2;
ArrayAdapter<String> adp1,adp2;
List<String> l1,l2;
int pos;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
l1=new ArrayList<String>();
l1.add("A");
l1.add("B");
sp1= (Spinner) findViewById(R.id.spinner1);
sp2= (Spinner) findViewById(R.id.spinner2);
adp1=new ArrayAdapter<String> (this,android.R.layout.simple_dropdown_item_1line,l1);
adp1.setDropDownViewResource(android.R.layout.simple_dropdown_item_1line);
sp1.setAdapter(adp1);
sp1.setOnItemSelectedListener(new OnItemSelectedListener() {
@Override
public void onItemSelected(AdapterView<?> arg0, View arg1,
int arg2, long arg3) {
// TODO Auto-generated method stub
pos=arg2;
add();
}
private void add() {
// TODO Auto-generated method stub
Toast.makeText(getBaseContext(), ""+pos, Toast.LENGTH_SHORT).show();
switch(pos)
{
case 0:
l2= new ArrayList<String>();
l2.add("A 1");
l2.add("A 2");
adp2=new ArrayAdapter<String>(MainActivity.this,
android.R.layout.simple_dropdown_item_1line,l2);
adp2.setDropDownViewResource(android.R.layout.simple_dropdown_item_1line);
sp2.setAdapter(adp2);
select();
break;
case 1:
l2= new ArrayList<String>();
l2.add("B 1");
l2.add("B 2");
adp2=new ArrayAdapter<String>(MainActivity.this,
android.R.layout.simple_dropdown_item_1line,l2);
adp2.setDropDownViewResource(android.R.layout.simple_dropdown_item_1line);
sp2.setAdapter(adp2);
select();
break;
}
}
private void select() {
// TODO Auto-generated method stub
sp2.setOnItemSelectedListener(new OnItemSelectedListener() {
@Override
public void onItemSelected(AdapterView<?> arg0, View arg1,
int arg2, long arg3) {
// TODO Auto-generated method stub
Toast.makeText(getBaseContext(), "Test "+arg2, Toast.LENGTH_SHORT).show();
}
@Override
public void onNothingSelected(AdapterView<?> arg0) {
// TODO Auto-generated method stub
}
});
}
@Override
public void onNothingSelected(AdapterView<?> arg0) {
// TODO Auto-generated method stub
}
});
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Android: How can I resize an image to be automatically fitted with the View I have png floor map images. I am using the below code to set the default image size to fit with the view.
imageView is the view inside my ImageSwitcher:
imageView.setImageMatrix(createDefaultMatrix());
private Matrix createDefaultMatrix() {
Matrix matrix = new Matrix();
matrix.setValues(new float[]{1.136f, 0.0f, -17.204117f,0.0f, 1.136f, 66.24078f,0.0f, 0.0f, 1.0f});
matrix.postScale(1.1085271f, 1.1085271f, 198.08646f, 304.4192f);
return matrix;
}
If you have notice, I am using fix values just to fit the image with the view.
Any guidance on how to fit it automatically to its View is appreciated.
A: Why don't you scale the images? Try CENTER_INSIDE.
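For example (a sketch; apply it to the ImageView inside your ImageSwitcher, or set android:scaleType in the layout XML):
imageView.setScaleType(ImageView.ScaleType.CENTER_INSIDE);
// or FIT_CENTER if you also want small bitmaps enlarged to fill the view
imageView.setScaleType(ImageView.ScaleType.FIT_CENTER);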
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Moving Emails to Public Folder using Dynamic Paths In our Corporate environment we have a Mailbox (not the default inbox) with many sub folders. We also have a Public Folder which is an exact mirror of the Mailbox folder structure.
I am trying to detect the path of a selected email and move that email to its mirrored folder in the Public Folders.
I would say 95% of this code is correct but I am left with an Outlook error message "Can't move the items."
The code is supposed to do the following:
1. detects the current folder of the selected email(s)
2. converts the MAPIFolder into a path string
3. shortens the string to remove the root Mailbox directory structure
4. adds the remaining string onto the root directory structure of the public folder
5. converts the resulting path back into a MAPIFolder
6. move the selected email(s) to the mirrored folder in the Public Folders
Sub PublicFolderAutoArchive()
Dim olApp As Object
Dim currentNameSpace As NameSpace
Dim wipFolder As MAPIFolder
Dim objFolder As MAPIFolder
Dim pubFolder As String
Dim wipFolderString As String
Dim Messages As Selection
Dim itm As Object
Dim Msg As MailItem
Dim Proceed As VbMsgBoxResult
Set olApp = Application
Set currentNameSpace = olApp.GetNamespace("MAPI")
Set wipFolder = Application.ActiveExplorer.CurrentFolder
Set Messages = ActiveExplorer.Selection
' Destination root directory'
' Tried with both "\\Public Folders" and "Public Folders" .. neither worked
pubFolder = "\\Public Folders\All Public Folders\InboxMirror"
' wipFolder.FolderPath Could be any folder in our mailbox such as:
' "\\Mailbox - Corporate Account\Inbox\SubFolder1\SubFolder2"
' however, the \\Mailbox - Corporate Account\Inbox\" part is
' static and never changes so the variable below removes the static
' section, then the remainder of the path is added onto the root
' of the public folder path which is an exact mirror of the inbox.
' This is to allow a dynamic Archive system where the destination
'path matches the source path except for the root directory.
wipFolderString = Right(wipFolder.FolderPath, Len(wipFolder.FolderPath) - 35)
' tried with and without the & "\" ... neither worked
Set objFolder = GetFolder(pubFolder & wipFolderString & "\")
If Messages.Count = 0 Then
Exit Sub
End If
For Each itm In Messages
If itm.Class = olMail Then
Proceed = MsgBox("Are you sure you want archive the message to the Public Folder?", _
vbYesNo + vbQuestion, "Confirm Archive")
If Proceed = vbYes Then
Set Msg = itm
Msg.Move objFolder
End If
End If
Next
End Sub
Public Function GetFolder(strFolderPath As String) As MAPIFolder
' strFolderPath needs to be something like
' "Public Folders\All Public Folders\Company\Sales" or
' "Personal Folders\Inbox\My Folder"
Dim objApp As Outlook.Application
Dim objNS As Outlook.NameSpace
Dim colFolders As Outlook.Folders
Dim objFolder As Outlook.MAPIFolder
Dim arrFolders() As String
Dim I As Long
On Error Resume Next
strFolderPath = Replace(strFolderPath, "/", "\")
arrFolders() = Split(strFolderPath, "\")
Set objApp = Application
Set objNS = objApp.GetNamespace("MAPI")
Set objFolder = objNS.Folders.Item(arrFolders(0))
If Not objFolder Is Nothing Then
For I = 1 To UBound(arrFolders)
Set colFolders = objFolder.Folders
Set objFolder = Nothing
Set objFolder = colFolders.Item(arrFolders(I))
If objFolder Is Nothing Then
Exit For
End If
Next
End If
Set GetFolder = objFolder
Set colFolders = Nothing
Set objNS = Nothing
Set objApp = Nothing
End Function
Note: The mailbox above is just an example and is not the actual mailbox name. I used MsgBox to confirm the path string was being joined correctly with all appropriate back slashes and that the Right() function was getting what I needed from the source path.
A: I'm not sure, but shouldn't it be something like this?
set objApp = New Outlook.Application
instead of
set objApp = Application
A: From glancing at the code, it appears that your GetFolder() implementation doesn't like the double-backslash you're giving at the start of the path. There's even a comment indicating this at the start of the function. Try removing those two chars from the front of pubFolder.
Alternatively, you could alter GetFolder to permit them. A few lines like this should do the trick.
If Left(strFolderPath, 2) = "\\" Then
strFolderPath = Right(strFolderPath, Len(strFolderPath) - 2)
End If
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Optimize the SVG output from Gnuplot I've been trying to plot a dataset containing about 500,000 values using gnuplot. Although the plotting went well, the SVG file it produced was too large (about 25 MB) and takes ages to render. Is there some way I can improve the file size?
I have vague understanding of the SVG file format and I realize that this is because SVG is a vector format and thus have to store 500,000 points individually.
I also tried Scour and re-printing the SVG without any success.
A: The time it takes to render you SVG file is proportional to the amount of information in it. Thus, the only way to speed up rendering is to reduce the amount of data
I think it is a little tedious to fiddle with an already generated SVG file. I would suggest to reduce the amount of data for gnuplot to plot.
Maybe every or some other reduction of data can help like splitting the data into multiple plots...
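For instance, plotting only every 50th sample cuts the number of SVG elements by roughly that factor (a sketch; adjust the stride and file names to taste):
set terminal svg size 800,600
set output 'plot.svg'
plot 'data.dat' every 50 with lines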
A: I would recommend keeping it in vector graphic format and then choosing a resolution for the document that you put it in later.
Main reason for doing this is that you might one day use that image in a poster (for example) and print it at hundreds of times the current resolution.
I normally convert my final pdf into djvu format.
pdf2djvu --dpi=600 -o my_file_600.djvu my_file.pdf
This lets me specify the resolution of the document as a whole (including the text), rather than different resolutions scattered throughout.
On the downside it does mean having a large pdf for the original document. However, this can be mitigated if you are using LaTeX to make your original pdf - you can use the draft option until you have finished, so that images are not imported in your day-to-day editing of the text (where rendering large images would be annoying).
A: Did you try printing to PDF and then convert to SVG?
In Linux, you can do that with imagemagick, which you even be able to use to reduce the size of your original SVG file.
Or there are online converters, such as http://image.online-convert.com/convert-to-svg
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Rails Devise Help Can someone give me an example of how to do something like this.
I have a user setup with devise, according to the way in this tutorial
https://github.com/fortuity/rails3-subdomain-devise/wiki/Tutorial-%28Walkthrough%29 (I skipped the stuff related to subdomain)
now say if I wanted to have a user make many tweets, so a user and a user alone can upload tweets, but anyone can see them, how would that be accomplished.
I find rails really tough, so if you could explain thoroughly that would be good (i.e. go to routes.rb insert get 'users/ ...)
A: Devise is simply an authentication gem. If you want to post tweets to Twitter, you're going to have to get into omniauth with devise. Theres a railscast for that: http://railscasts.com/episodes/236-omniauth-part-2
And devise has a wiki on direct integration: https://github.com/plataformatec/devise/wiki/OmniAuth:-Overview
If you're talking more hypothetically about creating and showing posts then after you have devise set up, just make filters on what users can do like this
before_filter :authenticate, :only => [:new, :create, :edit, :destroy]
That would go in the Posts controller right under the class declaration. Basically it says before you load pages new, create, edit, or destroy make sure the user is authenticated by calling authenticate (I believe devise already has that authenticate method built in but if not it is easy to write). Then any person could go to the show method to see the tweets, but could not create them for that user.
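If your setup does not already provide such a method, a minimal sketch using Devise's helpers could be:
private
# Devise also ships authenticate_user!, which you can use in before_filter directly
def authenticate
  redirect_to new_user_session_path unless user_signed_in?
end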
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: New MVC project I just need some guidance. I'm fairly new to MVC3 and to web development in general. I want to create my own project that will involve data storage/access (SQL Server db) and user log-in functionality. I'm not sure where to start; there seem to be so many ways to do this, and following tutorials doesn't really give the experience of setting up a real-life project (or it's just my impression). Would you guys suggest starting from an empty MVC3 project, or perhaps using the scaffolding feature, building on top of it, and using the membership provider built into it? Some tutorials (by Scott Hanselman) actually explain how to copy and paste the membership feature into a new project. Is this approach suggested? Any suggestions will help! Thanks!
A: Here are some tutorials
MVC 3 with razor
ASP.Net website
MVC Music Store
Strongly typed helpers in mvc
Brad Wilson's Blog
MVC Subdomain routing
MVC Partial Views
Some javascript for MVC
MVC Javascript frameworks
JSON and MVC
Form update using JSON in MVC
A: I'm just starting too, I found this very helpful, especially the videos on the left.
But what I found extremely helpful is the MVC Music Store and the Contoso University site (I recommend to download as PDF; you can see a button to do so at the top).
A: The NerdDinner tutorial is a great way to start with ASP.NET MVC, although the online version is slightly out of date. However, I can strongly recommend the book that this tutorial (in its updated version) is the first chapter of: Professional ASP.NET MVC 3.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: writing to SD card in android public void initCache(Context context)
{
this.context = context;
//Find the dir to save cached images
if (android.os.Environment.getExternalStorageState().equals(android.os.Environment.MEDIA_MOUNTED))
cacheDir=new File(android.os.Environment.getExternalStorageDirectory(),"myapp");
else
cacheDir=context.getCacheDir();
if(!cacheDir.exists())
cacheDir.mkdirs();
}
I have the right permission in the manifest file, and I also have the SD card installed. But in spite of this my app is writing to internal storage. I am using a Motorola Atrix.
A:
Environment.getExternalStorageState() returns path to internal SD mount point like "/mnt/sdcard"
No, Environment.getExternalStorageDirectory() refers to whatever the device manufacturer considered to be "external storage". On some devices, this is removable media, like an SD card. On some devices, this is a portion of on-device flash. Here, "external storage" means "the stuff accessible via USB Mass Storage mode when mounted on a host machine", at least for Android 1.x and 2.x.
But the question is about external SD. How to get a path like "/mnt/sdcard/external_sd" (it may differ from device to device)?
Android has no concept of "external SD", aside from external storage, as described above.
If a device manufacturer has elected to have external storage be on-board flash and also has an SD card, you will need to contact that manufacturer to determine whether or not you can use the SD card (not guaranteed) and what the rules are for using it, such as what path to use for it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: AS3 not passing number by value, rather by reference? I have this:
for (var i:int = 0; i < 3; i++) {
var newChoice:MainButton = new MainButton(function(){
trace(this["func" + i])} );
}
public function func0 ...
public function func1 ...
public function func2 ...
(When clicked, MainButton calls the function in the argument)
However, I get func3, which I assume is due to it finding the value of i. But shouldn't it pass by value since it's a number? How do I get the wanted result? Thanks
A: You're not passing anything, except the function itself (which is passed by reference).
What's happening is that the function creates a closure around the variable i, changing its lifetime. When the anonymous function is called, i is still in its original scope, but the loop has already finished, leaving i at 3.
So, the closure is essentially keeping i in the scope of the function even after the original, declaring function has finished.
Instead of closing over the variable, you wanted to close over the variable's value at the time the function is created. You can achieve this with an intermediate variable that's set only once before being closed over:
for (var i:int = 0; i < 3; i++) {
var j = i; // New variable each time through the loop; closure will close over a different variable each time (that happens to have the same name)
var newChoice:MainButton = new MainButton(function(){
trace(this["func" + j])} );
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Make a ButtonListener in a second layout How would I implement a ButtonListener for a second layout which is still called in the main Activity?
I already tried it with a named button listener and with an anonymous one, but I still get NullPointerExceptions.
Code:
back = (Button) findViewById(R.id.backToMain);
if(back != null)
back.setOnClickListener( new View.OnClickListener() {
public void onClick(View view) {
setLayout(R.layout.main);
}
});
Layout.xml
<Button android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:id="@+id/backToMain"
android:text="Zurück">
</Button>
A: Try this
Change the button attr tag
<Button android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:id="@+id/backToMain"
android:text="Zurück"
android:onClick="goBack">
</Button>
In your activity create a method
public void goBack(View v) {
//Write code here
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Zend DB Select version of a simple MySQL Left Join I have a simple left join query in MySQL:
SELECT sp. * , p.name, p.date_created
FROM subpages sp
LEFT JOIN pages p ON p.id = sp.page_id
I don't know how to execute this query from Zend Framework.
I have a Page mapper, which accesses a DbTable Page class extending Zend_Db_Table_Abstract. I read some articles, so I suppose the statement should look something like this from the mapper:
$select = $this -> DbTable() -> select();
$select -> joinleft(..... This is what I dont know how to write....);
$results = $this -> DbTable() -> fetchAll($select);
if($results) { return $result; }
In the article, $select -> from() was used; that is where my mind is stuck. Why do we need to write $select -> from("subpages") when it is already defined in the DbTable class?
How to write the join statement properly?
A: From the query you posted , you can use the following
(You have not mentioned the table name for alias 'mi', so I have taken it as 'your_table_name'.)
$select = $this -> DbTable() -> select()->from(array('mi' => 'your_table_name'));
$select -> joinleft(array("pages" => 'p'),"p.id = mi.page_id");
$results = $this -> DbTable() -> fetchAll($select);
if($results) { return $result; }
Hope it will work for you mrN.
A: If you want to use the select with multiple tables, you need to get it from the table adapter, which is a Zend_Db object (and you'll need to specify your table in the from() method).
So, you would need something like the following:
$select = $this->DbTable()->getAdapter()->select();
$select->from(array("mi" => "tableName"));
$select->joinLeft(array("p" => "pages"), "p.id = mi.page_id");
(etc)
Hope that helps,
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Administrate facebook application via graph api I was wondering if it is possible to change your site url or canvas url using the graph api. I have admin rights to my application and I can change it via the gui but I would prefer to be changing it via a curl.
Thanks
A: You need to use the old REST method admin.setAppProperties
https://developers.facebook.com/docs/reference/rest/admin.setAppProperties/
At the moment FB has not yet moved that method to the Open Graph, but you can still call REST methods via the Graph API (as shown in the console on the page above).
Even then, you cannot change the secure canvas and secure tab URL settings. FB is apparently working on making those settings part of the API, as far as I have heard.
A: Here is an example:
$properties = array(
"callback_url" => '',
"installable" => 1,
"profile_tab_url" => "",
"tab_default_name" => "",
'tos_url' => '',
'privacy_url' => ''
);
$facebook = new Facebook(array('appId' => 'YOUR_APP_ID', 'secret' => 'YOUR_APP_SECRET'));
$facebook->api(array(
    'method'     => 'admin_setAppProperties',
    'properties' => json_encode($properties)
));
Here is a list of the properties you can set: http://developers.facebook.com/docs/appproperties/
There is also admin.getAppProperties; more info can be found here: http://developers.facebook.com/docs/reference/rest/admin.getAppProperties/
This example uses the Facebook PHP SDK.
A: You can now change app properties via the Graph API. You can make a POST request, using the application id in the path, use the app access token as the access token, and pass in key/value pairs as POST variables for the properties you'd like to modify.
https://developers.facebook.com/docs/reference/api/application/ has a list of the application properties that you can modify with the Graph API.
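For instance, a hedged sketch of such a call with curl might look like this (it assumes canvas_url is among the writable properties listed in the reference above; substitute your real app id and app access token):

# Assumption: canvas_url is a writable property of the application object.
curl -X POST "https://graph.facebook.com/YOUR_APP_ID" \
     -d "canvas_url=https://example.com/canvas/" \
     -d "access_token=YOUR_APP_ACCESS_TOKEN"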
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Rails 3 array - add items with category then list items in category As part of my Rails 3 app, I want the User to be able to click on links on other profiles/pages and have the string value of the link be added to an array belonging to that User's profile.
Specifically, what I am looking to do is populate a list of :todos for each profile depending on which todo they click. The idea is that each todo will fall within one of two categories: inside and outside. So clicking the links will push the value of the todo to either inside or outside. Then the User's profile will display a list of :todos inside and outside, and count the total of todos for that User's profile.
Since I'm a beginner to programming, I got some help here on SO about setting this up; however I need some help finishing it. I can't quite seem to connect all the dots. I've set up a join model but am not able to add the todo's string value, then list/count it in the profile. Here is my code:
profile.rb Model:
class Profile < ActiveRecord::Base
  belongs_to :user
  accepts_nested_attributes_for :user
  has_many :profile_todos
  has_many :todos, :through => :profile_todos

  def add_todo_inside_item(item)
    self.profile_todos.build :category => 'inside'
  end

  def add_todo_outside_item(item)
    self.profile_todos.build :category => 'outside'
  end
end
todo.rb Model:
class Todo < ActiveRecord::Base
  belongs_to :profile
end
profile_todo.rb Model:
class ProfileTodo < ActiveRecord::Base
  belongs_to :profile
  belongs_to :todo
end
_create_todos.rb Migration:
class CreateTodos < ActiveRecord::Migration
  def self.up
    create_table :todos do |t|
      t.string :name
      t.timestamps
    end
  end
end
_create_profile_todos.rb Migration:
class CreateProfileTodos < ActiveRecord::Migration
  def self.up
    create_table :profile_todos do |t|
      t.string :category
      t.integer :profile_id
      t.timestamps
    end
  end
end
Listing the todos in a User's Profile:
<div id="todos">
<p><%= @profile.first_name %> has <%= pluralize(@profile.profile_todos.count, 'goal') %>.</p>
<div id="todo-list">
<div class="todos_inside">
<p><%= @profile.profile_todos(:category => 'inside') %>.</p>
</div>
<div class="todos_outside">
<p><%= @profile.profile_todos(:category => 'outside') %>.</p>
</div>
</div>
</div>
add_item to @profile.todos:
<li><%= link_to "#", :class => 'button white' do %><%= @user.profile.stringtoadd %><% end %></li>
A: As @socjopata mentioned, you're going to need some sort of controller action to manage the creation and building of your ProfileTodo records. Since you already created a ProfileTodo join model, go ahead and create a ProfileTodosController.
More on controllers:
http://guides.rubyonrails.org/action_controller_overview.html
Your link_to tag should then make a remote call to the create action. In order to get everything to work properly, you'd most likely need to supply the controller with both the profile_id and topic_id in order to make the correct RESTful transaction, which means you'll have to supply a parameter hash to your link_to tag, which can get kinda messy if you use the url_options.
Look at passing in url_options:
http://api.rubyonrails.org/classes/ActionView/Helpers/UrlHelper.html#method-i-link_to
Ultimately, you are creating a new ProfileTodo record, so I think it would be a better description of the work being done if you used a form_for tag that has hidden fields for the profile_id and the topic_id. You can also make forms remote in rails by supplying them with :remote => true
Assuming you make an accompanying controller and add RESTful resources to your config/routes.rb file, your form for each individual topic would look something like this:
<%= form_tag(profile_todos_path, :remote => true) do %>
  <%= hidden_field_tag :profile_id, @user.profile.id %>
  <%= hidden_field_tag :topic_id, topic.id %>
  <%= submit_tag topic.name %>
<% end %>
You can always style your forms, so if you only want a link to be displayed, that should be doable :)
A: Rather, you need a remote link pointing to some action, so you can manipulate your todos without refreshing the page. From the docs:
:remote => true - This will allow the unobtrusive JavaScript driver to
make an Ajax request to the URL in question instead of following the
link. The drivers each provide mechanisms for listening for the
completion of the Ajax request and performing JavaScript operations
once they’re complete
As for the other question, you can define scopes you'll use for displaying and counting objects:
http://api.rubyonrails.org/classes/ActiveRecord/NamedScope/ClassMethods.html
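As a hedged illustration of that suggestion, scopes on the join model from the question might look like this (Rails 3 syntax; the names inside and outside simply mirror the two categories above):

class ProfileTodo < ActiveRecord::Base
  belongs_to :profile
  belongs_to :todo

  # Rails 3 scopes for the two categories used in the question
  scope :inside,  where(:category => 'inside')
  scope :outside, where(:category => 'outside')
end

The view can then call @profile.profile_todos.inside and @profile.profile_todos.outside (and .count on either) instead of passing a hash to the association.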
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Using let in .hs file I'm using Notepad++ and WinGHCi to do some homework and I have to define a little "database". The format is arbitrary and I don't think that's where I'm going wrong. Anyway, here's what I'm using in a *.hs file:
let studentDB = [
("sally", ["cpsc110", "cpsc312", "cpsc204"]),
("jim", ["cpsc110", "cpsc313"]),
("bob", ["cpsc121", "cpsc303", "cpsc212"]),
("frank", ["cpsc110", "cpsc212", "cpsc204"]),
("billy", ["cpsc312", "cpsc236"]),
("jane", ["cpsc121"]),
("larry", ["cpsc411", "cpsc236"]) ]
WinGHCi gives me this error: a1.hs:118:1: parse error (possibly incorrect indentation)
I tried tabbing the tuples over and placing my list brackets on different lines, but couldn't get anything to work. I thought something smaller would help me track down the bug, so I did this instead:
let s = []
But that gave me the same error. Is this an indentation error, maybe due to some quirky Notepad++ behavior? Or is my Haskell wrong? Thanks.
A: I imagine you're thinking that the contents of a *.hs file are like what you can type into ghci. That's incorrect. When you're typing into ghci you're effectively typing into a do block. So the following syntax is correct:
main = do
let s = []
-- do more stuff
However, at the top level of a *.hs file, things are different. The let construct is actually
let s = [] in
codeThatReferencesS
If you want to define a top-level binding, just say
s = []
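Applied to the studentDB example above, a minimal sketch of the corrected a1.hs could look like this (abbreviated; the coursesFor helper is only an illustration, not part of the assignment):

-- a1.hs: top-level bindings take no 'let'
studentDB :: [(String, [String])]
studentDB =
  [ ("sally", ["cpsc110", "cpsc312", "cpsc204"])
  , ("jim",   ["cpsc110", "cpsc313"])
  , ("jane",  ["cpsc121"])
  ]

-- Hypothetical helper showing the binding in use:
coursesFor :: String -> [String]
coursesFor name = maybe [] id (lookup name studentDB)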
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: SQL Prompt alternatives for SQL Server 2005? I'm trying to wean myself from SQL Prompt from Redgate because I'm now on a low low budget and I can't afford to go from 4 to 5. I haven't found anything nearly as good for SQL Server 2005.
Should I just save up, or has someone found a better tool or a way to exist without the intellisense features??
A: You can try out free SSMS and VS add-in, ApexSQL Complete
ApexSQL Complete provides snippets and syntax checking and a lot of code auto-complete features (context based predicting keywords, users, objects, autocompleting aliases, the Insert procedure parameters automatically option , the Insert full INSERT statement option and so on)
For code refactoring you could use ApexSQL Refactor, also a free add-in
Disclaimer: I work for ApexSQL as a Support Engineer
A: DevArt has a similar tool called SQL Complete which seems to support SQL Server 2005, and even offers a free Express edition.
A: Database.NET is basic, but pretty decent:
http://fishcodelib.com/Database.htm
Free for personal use, $19 commercial license, and works with SQL Server, Access (what I use it for, when I'm forced to deal with Access databases), and a number of others.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: DotNetOpenAuth setting the popup to do a post instead of a get I was able to get the DotNetOpenAuth set up in my MVC3 project. (Took a long time, but finally was able to get all the missing pieces.)
However, now I'm getting a 414 error from Google saying that the request URL is too long. I found that Google says in those cases the request should be a POST and the issue would be gone. I was wondering if there is a way to construct a POST instead of a GET when the popup windows are being pre-loaded?
I'm using NerdDinner as my stepping stone, and have used the code in its AuthController to get the URLs preloaded. http://nerddinner.codeplex.com/SourceControl/changeset/view/70027#952619
Is it possible to do a POST in those popups? Or how did people get around the 414 error from Google?
A: DotNetOpenAuth already 'upgrades' long URLs from GET to POST, and does so at the 2048 character length threshold. There have occasionally been reports that this threshold is too high, and it is adjustable by web.config file setting. However, I don't think that adjusts the AJAX .js file on the client (yet).
If you will please file a ticket describing this problem, we can get a maintenance release of DotNetOpenAuth out that resolves this issue.
FYI the .js file is found in the DotNetOpenAuth project source code and is called OpenIdRelyingPartyControlBase.js. But since it compiles as a resource into the dotnetopenauth.dll and downloads to the web browser directly from there, it makes it inconvenient for you to fix with an adjusted threshold.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7564256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|